
Posts

Unique bucket names in Google Cloud Storage

One interesting thing about Google Cloud Storage buckets, which perhaps not everyone is aware of, is that their names are unique at the Google Cloud Storage level (and not at the project level, as most of us would assume). That means that a bucket name must be globally unique, and that is a problem, since some of the names we choose may already exist. The approach Google Cloud recommends is building bucket names as subdomains of a domain we own. Google will verify domain ownership before creating the bucket, so nobody else will be able to create buckets under our domain and, therefore, our buckets will be unique at the Google Cloud level. For multi-site projects, it is also a good idea to add the bucket location to the name, so that equivalent buckets can coexist in different locations. Examples of this would be: mybucket.mydomain.net for a global "mybucket" bucket associated with our domain "mydomain.net" mybucke...
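
As a hedged illustration (not code from the post itself), a domain-named bucket with an explicit location could be created with the official Go client, cloud.google.com/go/storage; the project ID, domain and location below are placeholder assumptions, and the domain must already be verified:

package main

import (
    "context"
    "log"

    "cloud.google.com/go/storage"
)

func main() {
    ctx := context.Background()

    client, err := storage.NewClient(ctx)
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()

    // Hypothetical names: "mydomain.net" must be verified by the creating
    // account, and "my-project" is a placeholder project ID.
    bucket := client.Bucket("eu.mybucket.mydomain.net")
    if err := bucket.Create(ctx, "my-project", &storage.BucketAttrs{
        Location: "EU", // the location is also encoded in the bucket name
    }); err != nil {
        log.Fatal(err)
    }
}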
Recent posts

Go: Calculating the MD5 hash of a struct

In some scenarios, calculating the hash of a piece of data is useful, for instance when you only need to know that something in the dataset has changed, no matter what. Calculating a hash is moderately compute-intensive but has some advantages:

- It avoids checking the dataset for changes field by field.
- It allows storing only the data hash instead of the whole dataset.

In Go, calculating the MD5 hash of a struct can be achieved with a piece of code like this:

import (
    "bytes"
    "crypto/md5"
    "encoding/gob"
    "encoding/hex"
    "fmt"
)

// Random struct, just for demo purposes
type MySubStruct struct {
    Zaz *bool
}

type MyStruct struct {
    Foo int
    Bar []string
    Baz MySubStruct
}

func main() {
    // Hasheable struct
    myBool := false
    myStruct := MyStruct{
        Foo: 67372388,
        Bar: []string{"b", "ar", ""},
        Baz: MySubStruct{
            Zaz: &myBool,
        },
    }
    // Crea...
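
The snippet above is truncated; as a minimal self-contained sketch of the technique the post describes (gob-encode the struct, then MD5 the resulting bytes), it could look like this, where hashStruct is a hypothetical helper name, not necessarily the article's own:

package main

import (
    "bytes"
    "crypto/md5"
    "encoding/gob"
    "encoding/hex"
    "fmt"
    "log"
)

// hashStruct gob-encodes any value and returns the hex-encoded MD5 of the bytes
func hashStruct(v interface{}) (string, error) {
    var buf bytes.Buffer
    if err := gob.NewEncoder(&buf).Encode(v); err != nil {
        return "", err
    }
    sum := md5.Sum(buf.Bytes())
    return hex.EncodeToString(sum[:]), nil
}

func main() {
    type point struct{ X, Y int }
    h, err := hashStruct(point{X: 1, Y: 2})
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(h) // the same struct contents always yield the same hash
}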

Concurrency in Go: Wait groups

This is the second article of a series analyzing Go concurrency, i.e., parallel execution of different threads using Go. The series is composed of these articles:

- The Go concurrency foundation: goroutines.
- This one, in which synchronization between subprocesses, specifically by using Go waitgroups, is analyzed.
- Data exchange between subprocesses: Go channels.

Why synchronization between subprocesses

The first article of the series presented an example that clearly demonstrates why subprocess synchronization is specifically needed in Go:

package main

import (
    "fmt"
    "time"
)

// mySubprocess sleeps for a second
func mySubprocess() {
    fmt.Println("Entering to mySubprocess")
    time.Sleep(1 * time.Second)
    fmt.Println("Exiting from mySubprocess")
}

// main program process
func main() {
    fmt.Println("Calling mySubprocess from main()...")
    go mySubprocess()
    fmt.Println("mySubprocess finished!...
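
The example above is cut off, but it sets up the problem this article solves: main() can exit before the goroutine finishes. As a hedged sketch of the waitgroup fix, reusing the names from the example (the exact wiring is an illustration, not necessarily the article's code):

package main

import (
    "fmt"
    "sync"
    "time"
)

// mySubprocess sleeps for a second and signals the wait group when done
func mySubprocess(wg *sync.WaitGroup) {
    defer wg.Done() // mark this goroutine as finished
    fmt.Println("Entering to mySubprocess")
    time.Sleep(1 * time.Second)
    fmt.Println("Exiting from mySubprocess")
}

func main() {
    var wg sync.WaitGroup
    wg.Add(1) // one goroutine to wait for
    go mySubprocess(&wg)
    wg.Wait() // block until every Add is matched by a Done
    fmt.Println("mySubprocess finished!")
}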

Concurrency in Go: Goroutines

This is the first article of a series analyzing Go concurrency, i.e., parallel execution of different threads using Go. The series is composed of these articles:

- This one, in which the Go concurrency foundation, the goroutines, are analyzed.
- Synchronization between subprocesses: Go waitgroups.
- Data exchange between subprocesses: Go channels.

Goroutines

In brief, a goroutine is a standard Go function that is processed in parallel (concurrently) with the main Go process. Let's see an example. The execution of the next code is quite predictable: the main Go process (the code in the main() function) will execute, then it will pause while the function mySubprocess() is running, and then it will finish. Just one process, no concurrent execution.

package main

import (
    "fmt"
    "time"
)

// mySubprocess sleeps for a second
func mySubprocess() {
    fmt.Println("Entering to mySubprocess")
    time.Sleep(1 * time.Second)
    fmt.Pr...
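
As a hedged sketch of the contrast the article goes on to draw, prefixing a call with the go keyword launches it concurrently; the time.Sleep in main() below is a deliberately naive way to keep the program alive long enough, which the waitgroups article later replaces:

package main

import (
    "fmt"
    "time"
)

func main() {
    // Launch an anonymous function as a goroutine with the "go" keyword
    go func() {
        fmt.Println("Running concurrently with main()")
    }()

    // Naive synchronization: without this pause, main() could exit
    // before the goroutine gets a chance to run
    time.Sleep(100 * time.Millisecond)
    fmt.Println("main() finished")
}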

Install mongosh in Linux Debian

mongosh, the MongoDB Shell, is a JavaScript and Node.js REPL environment for interacting with MongoDB deployments in Atlas, locally, or on another remote host. It is a really productive tool that, at the time of writing this article, is not yet included in the Debian distributions. Does that mean that it cannot be installed on Debian? No: it can be installed on Debian, since MongoDB publishes the .deb packages for the different Debian distributions on its own repository. The steps are described below.

1. Check the Debian distribution name

This section deals with the steps needed to determine the Debian distribution name (wheezy, stretch, jessie, buster, bullseye, bookworm,...) of the system where mongosh is being installed. If you already know it, this step can be omitted. Install the latest version of lsb-release:

sudo apt-get update
sudo apt-get install lsb-release

Once installed, run the command:

lsb-release -a

2. Import ...
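
The post is truncated at step 2. As a hedged sketch based on MongoDB's documented repository setup, not on the truncated text, importing the key and installing the package could look like the following; the 7.0 series and bookworm are assumptions, so substitute the distribution name obtained in step 1:

# Import MongoDB's public GPG key (7.0 series assumed)
curl -fsSL https://www.mongodb.org/static/pgp/server-7.0.asc | \
  sudo gpg -o /usr/share/keyrings/mongodb-server-7.0.gpg --dearmor

# Add the repository for the distribution detected in step 1 (bookworm assumed)
echo "deb [ signed-by=/usr/share/keyrings/mongodb-server-7.0.gpg ] http://repo.mongodb.org/apt/debian bookworm/mongodb-org/7.0 main" | \
  sudo tee /etc/apt/sources.list.d/mongodb-org-7.0.list

# Install mongosh
sudo apt-get update
sudo apt-get install -y mongodb-mongosh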

Defining Google Cloud IAM conditions for Secret Manager roles

Defining conditions for the permissions granted to a Google Cloud service account helps to enforce our security policy. By defining conditions we can, for instance, specify not just that a given account can access secrets, but also which secrets. This is really important since, if an attacker took control of a compute resource associated with an account that has read access to secrets, he would literally be able to read all our secrets. However, if a condition is applied to that permission, only the secrets matching the condition would be exposed. In order to define an IAM permission condition, it is necessary to access the Google Cloud IAM administration console and edit the principal (service account) whose permissions must be conditioned. Then, by clicking the "ADD CONDITION" label of the role whose permissions must be conditioned, we access the condition definition view, which contains both the Condition Builder, which allows us to define c...
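
As a hedged illustration of what such a condition can look like: IAM conditions are CEL expressions over attributes such as resource.name, so an expression like the following (the project number and secret name prefix are hypothetical) would restrict a secretmanager.secretAccessor grant to the secrets whose names start with prod-:

resource.name.startsWith("projects/123456789/secrets/prod-")

With that condition in place, requests for any other secret under the same role would be denied.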

Debugging Google Cloud Functions with event signature in Node

In the article Debugging Google Cloud Functions in Node we presented a first approach to what the Google Functions Framework is and how it can be used. The article was mainly focused on debugging Google Cloud Functions with an HTTP signature, i.e., those functions intended to be triggered by an HTTP request. There are, however, other ways of triggering the execution of a Google Cloud Function, for instance from Cloud Storage, Firestore or, more commonly, by forwarding a Pub/Sub message. This article covers how to debug Google Cloud Functions intended to be triggered by a Pub/Sub message.

Google Cloud Functions with cloud event signature

Let's assume we have a function that must be triggered by a Pub/Sub message. To do that, we need to perform two actions: properly configuring Google Cloud to specify that our function will be triggered by a given kind of Pub/Sub message. This part is described in the official documentation and it's out of the scope of this article...
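
As a hedged sketch (not necessarily the article's exact code), a Pub/Sub-triggered function with the cloud event signature can be declared with @google-cloud/functions-framework and then run locally for debugging; the function name helloPubSub is a placeholder:

// index.js
const functions = require('@google-cloud/functions-framework');

// CloudEvent handler: Pub/Sub delivers its payload base64-encoded
// under data.message.data
functions.cloudEvent('helloPubSub', (cloudEvent) => {
  const encoded = cloudEvent.data.message.data;
  const message = Buffer.from(encoded, 'base64').toString();
  console.log(`Received Pub/Sub message: ${message}`);
});

Running npx @google-cloud/functions-framework --target=helloPubSub --signature-type=cloudevent then serves the function locally, where a debugger can be attached.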