One interesting thing about Google Cloud Storage buckets, that maybe not everyone is aware of, is that their names are unique across all of Google Cloud Storage (and not per project, as most of us would assume). That means a bucket name must be globally unique, which is a problem because some of the names we choose may already exist. What Google Cloud recommends is naming buckets as subdomains of a domain we own. Google verifies domain ownership before allowing the bucket to be created, so nobody else will be able to create buckets under our domain and, therefore, our bucket names become unique at the Google Cloud level. For multi-site projects, it is also a good idea to add the bucket location to the name, so that equivalent buckets can coexist in different locations. Examples of this would be:

- mybucket.mydomain.net for a global "mybucket" bucket associated with our domain "mydomain.net"
- mybucket.eu.mydomain.net for an equivalent bucket tied to a specific location (the EU in this case)
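A minimal sketch of this naming scheme using the official cloud.google.com/go/storage client; the project ID, domain and location below are placeholder assumptions, and the domain must already be verified for the account creating the bucket:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"cloud.google.com/go/storage"
)

func main() {
	ctx := context.Background()

	client, err := storage.NewClient(ctx)
	if err != nil {
		log.Fatalf("storage.NewClient: %v", err)
	}
	defer client.Close()

	// Hypothetical values, just for the example.
	projectID := "my-project"
	bucketName := "mybucket.eu.mydomain.net" // location included in the name

	// Create the bucket in an explicit location, so that equivalent
	// buckets can coexist in other regions under the same domain.
	if err := client.Bucket(bucketName).Create(ctx, projectID, &storage.BucketAttrs{
		Location: "EU",
	}); err != nil {
		log.Fatalf("Bucket(%q).Create: %v", bucketName, err)
	}
	fmt.Printf("Bucket %s created\n", bucketName)
}
```

Note that the Create call will be rejected unless Google can verify that we own mydomain.net.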
In some scenarios, calculating the hash of a piece of data is useful, for instance when you only need to know that something in the dataset has changed, no matter what. Calculating a hash is moderately compute-intensive but has some advantages:

- It avoids checking the dataset for changes field by field
- It allows storing only the hash of the data instead of the whole dataset

In Golang, calculating MD5 hashes of structs can be achieved with a piece of code like this:

```go
package main

import (
	"bytes"
	"crypto/md5"
	"encoding/gob"
	"encoding/hex"
	"fmt"
)

// Random struct, just for demo purposes
type MySubStruct struct {
	Zaz *bool
}

type MyStruct struct {
	Foo int
	Bar []string
	Baz MySubStruct
}

func main() {
	// Hashable struct
	myBool := false
	myStruct := MyStruct{
		Foo: 67372388,
		Bar: []string{"b", "ar", ""},
		Baz: MySubStruct{
			Zaz: &myBool,
		},
	}

	// Create a gob encoder writing into an in-memory buffer and
	// serialize the struct into a byte sequence.
	var buffer bytes.Buffer
	if err := gob.NewEncoder(&buffer).Encode(myStruct); err != nil {
		panic(err)
	}

	// Compute the MD5 digest of the encoded bytes and print it
	// as a hexadecimal string.
	hash := md5.Sum(buffer.Bytes())
	fmt.Println(hex.EncodeToString(hash[:]))
}
```
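As a follow-up, a small sketch of how such a hash could drive change detection, along the lines of the advantages listed above; the hashStruct helper and the Record type are illustrative assumptions, not part of the original code:

```go
package main

import (
	"bytes"
	"crypto/md5"
	"encoding/gob"
	"encoding/hex"
	"fmt"
)

// hashStruct is a hypothetical helper wrapping the gob + MD5 steps
// from the snippet above.
func hashStruct(v interface{}) (string, error) {
	var buffer bytes.Buffer
	if err := gob.NewEncoder(&buffer).Encode(v); err != nil {
		return "", err
	}
	sum := md5.Sum(buffer.Bytes())
	return hex.EncodeToString(sum[:]), nil
}

// Record stands in for whatever dataset we want to track.
type Record struct {
	Foo int
	Bar []string
}

func main() {
	record := Record{Foo: 1, Bar: []string{"a", "b"}}

	// Store only the hash of the dataset instead of the dataset itself.
	stored, _ := hashStruct(record)

	// Later, recompute the hash and compare: a different digest means
	// something changed, without inspecting the fields one by one.
	record.Foo = 2
	current, _ := hashStruct(record)
	if current != stored {
		fmt.Println("dataset changed")
	}
}
```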