Go Learn

Learning as we Go

Some rookie Go mistakes we made building Teamwork Desk, and what we learned from them

We love Go. Over the last year we have written nearly 200,000 lines of Go code in the numerous services that make up Teamwork Desk. We have built nearly a dozen small HTTP services that make up the product. Why Go? Go is a fast (very fast), statically-typed, compiled language, with a powerful concurrency model, garbage collection, superb standard library, no inheritance, legendary authors, multi-core support, and a great community - not to mention that for us writing web apps it has a goroutine-per-request set-up that avoids event loops and callback hell. It has become hugely popular for building systems and servers, particularly micro-services.

Like working with any new language or technology, we fumbled about with it for a bit while we were experimenting in the early days. Go really does have its own style and idioms, especially if you come from an OO language like Java or a scripting language like Python. So we made some mistakes :) and we'd like to share some of them here and what we learned. If you use Go in production, you will recognize all of these. If you are just starting to use Go, then hopefully you might find something here that helps.

1. Revel was not a good choice for us

Just starting with Go? Building a web server? You need a framework, right? Well, you might think so. There are advantages to using an MVC framework - primarily the convention-over-configuration approach, which gives a set project structure and can bring consistency and a lower barrier to entry across projects. What we found was that we preferred the power of configuration over the advantage of convention, especially as Go makes it so easy to write a web app with minimal fuss, and many of our web apps are small services. The nail in the coffin for us was the fact that Revel is simply not idiomatic. It was written from the point of view of trying to introduce a Play- or Rails-like framework into Go, instead of using the power of Go and the stdlib and building from there. From the author:

Initially, this was just a fun project to see if I could replicate the magical Play! 1.x experience in much-less-magical Go

In fairness, going with an MVC framework in a new language made a lot of sense for us at the time, because it removed the debate about structure and allowed a new team to build on something in a coherent way. Almost every web app I had ever written pre-Go was with the help of some flavour of MVC framework. C#? ASP.NET MVC. Java? SpringMVC. PHP? Symfony. Python? CherryPy. Ruby? RoR. We realised over time that we did not need a framework in Go. The standard library HTTP package has what you need, and you typically then add a multiplexer (like mux) for routing and a lib for middleware (like negroni) for handling things like auth and logging, and that's all you need. Go's HTTP package design makes it easy to do this. You also come to realise that some of the power of Go is in the toolchain and the tools around Go, giving you a wide range of powerful commands that can run against your code. However, in Revel, because of the project structure set out, and the fact that there is no package main and func main() {} entry point (idiomatic, and necessary for some Go commands), you cannot use these tools. In fact, Revel comes with its own command package that mirrors some of those commands, like run and build.

Using Revel, we found that we strayed too far from the idiomatic way of building Go, and we lost the power of some great parts of the Go toolset.

2. Use Panics wisely

If you come from Java or C#, error handling in Go can take a little getting used to. Go has multiple return values, so a very typical scenario is a function that returns something and also an error, which will be nil if everything worked okay (nil is the zero value for Go's pointers, interfaces, maps, slices, channels, and functions - error itself is an interface type).

func something() (thing string, err error) {  
    s := db.GetSomething()
    if s == "" {
        return s, errors.New("Nothing Found")
    }
    return s, nil
}

We ended up using panic, where really we wanted to create an error and let it be handled somewhere higher up the call stack.

s, err := something()
if err != nil {
    panic(err)
}

We just panicked, literally. An error?! OMG, run! But in Go you come to realise that errors are values; they are a completely natural and idiomatic part of calling functions and dealing with the result. A panic will bring your app down, kill it dead - like an unhandled runtime exception. Why would you do that just because a function returned an error? Lesson learned. And pre-1.6, the stack dump for a panic included all running goroutines, making it very difficult to find the original problem among all the output you had to wade through.
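The panic-free version of the pattern above, as a hedged sketch (something, handler, and the messages are our invented examples, not the article's code): create the error, wrap it with context, and let a caller higher up the stack decide what to do with it.

```go
package main

import (
	"errors"
	"fmt"
)

// something stands in for a function that can fail, like the db lookup above.
func something() (string, error) {
	return "", errors.New("nothing found")
}

// handler treats the error as a value: wrap it with context and pass it up,
// instead of panicking.
func handler() error {
	s, err := something()
	if err != nil {
		return fmt.Errorf("loading thing: %v", err)
	}
	fmt.Println(s)
	return nil
}

func main() {
	if err := handler(); err != nil {
		// At the top of the stack we can log and respond with a 500,
		// without crashing the process.
		fmt.Println("request failed:", err) // prints "request failed: loading thing: nothing found"
	}
}
```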

Even when you do have a genuine non-recoverable error, or you encounter a runtime panic, you probably still don't want to crash your entire web server, which could be in the middle of lots of other things (you use transactions for your db, right?). So we learned to handle these panics by adding a filter in Revel, which recovers the panic, captures the stack trace, prints it to the log file, and sends it to Sentry, where we are alerted by email and in Teamwork Chat immediately. The API returns a 500 Internal Server Error to the frontend.

// PanicFilter wraps the action invocation in a protective defer blanket that
// recovers panics, logs everything, and returns 500.
func PanicFilter(rc *revel.Controller, fc []revel.Filter) {  
    defer func() {
        if err := recover(); err != nil {
            handleInvocationPanic(rc, err) // stack trace, logging, alerting
        }
    }()
    fc[0](rc, fc[1:])
}

3. Be careful reading from Request.Body more than once

After reading from http.Request.Body, the Body is drained and subsequent reads will return []byte{} - an empty body. This is because reading the bytes of an http.Request.Body leaves the reader at the end of the bytes; it would need to be reset to read again. However, http.Request.Body is an io.ReadCloser and does not have methods such as Peek or Seek that would help. One way around this is to copy the Body into memory first, then set the original back after reading - expensive if you have very large requests. Definitely a gotcha, and one that still catches us out every now and again!

Here's a short but complete program that shows this:

package main

import (  
    "bytes"
    "fmt"
    "io/ioutil"
    "net/http"
)

func main() {  
    r := http.Request{}
    // Body is an io.ReadCloser, so we wrap our buffer in a NopCloser to satisfy that interface
    r.Body = ioutil.NopCloser(bytes.NewBuffer([]byte("test")))

    s, _ := ioutil.ReadAll(r.Body)
    fmt.Println(string(s)) // prints "test"

    s, _ = ioutil.ReadAll(r.Body)
    fmt.Println(string(s)) // prints empty string! 
}

Here's the code to copy it off and set it back...if you remember :)

content, _ := ioutil.ReadAll(r.Body)  
// Replace the body with a new io.ReadCloser that yields the same bytes
r.Body = ioutil.NopCloser(bytes.NewBuffer(content))  
again, _ := ioutil.ReadAll(r.Body)  

You could create a little util function

func ReadNotDrain(r *http.Request) (content []byte, err error) {  
    content, err = ioutil.ReadAll(r.Body)
    r.Body = ioutil.NopCloser(bytes.NewBuffer(content)) 
    return
}

and call that instead of using something like ioutil.ReadAll

content, err := ReadNotDrain(&r)  

Of course, you have now replaced r.Body.Close() with a no-op that does nothing when Close is called on the request Body. This is how httputil.DumpRequest works, too.

4. There are some ever-improving libraries to help you write SQL

The core part of Teamwork Desk that serves up the web app to customers deals with MySQL a lot. We don't use stored procedures, so our data layer in Go consisted of some serious SQL...and some of that code would win a gold medal in Olympic gymnastics as it contorted itself to build up a complex query. We started using Gorm and its chainable API to build up our SQL. You can still use raw SQL with Gorm and have it marshal the result into your struct. (Worth noting that, in fact, we find ourselves doing this more and more often recently, which may hint that we need to revisit how we are really using Gorm and make sure we are getting the best out of it, or look at alternatives - we're not afraid to do that either!)

For some, ORM is a dirty word - people say you lose control, understanding and possibility to really optimize queries - all true. We really just use Gorm as a wrapper to build queries where we understand the output it will give us, not as a full blown ORM. It lets you build up a query using its chainable API, like below, and marshal the result to a struct. It has a huge amount of features that can take away some of the pain of hand-crafting SQL in your code. It does also support Preloading, Limits, Grouping, Associations, Raw SQL, Transactions and more. Worth looking at if you are writing SQL by hand right now in Go.

var customer Customer
db.
    Joins("inner join tickets on tickets.customersId = customers.id").
    Where("tickets.id = ?", e.Id).
    Where("tickets.state = ?", "active").
    Where("customers.state = ?", "Cork").
    Where("customers.isPaid = ?", false).
    First(&customer)

5. Pointless pointers are pointless

This was specific to slices really. Passing a slice to a function? Well in Go, arrays are values so if you have a large array, you don't want to be making a copy of it every time it is passed around or assigned right? True. It can be expensive in terms of memory to pass an array around. But in Go, 99% of the time you are actually dealing with a slice, not an array. A slice can basically be thought of as describing some section of an array (often all of it), consisting of a pointer to the starting array element, the length of the slice and the capacity of the slice.

Each of those three fields requires only 8 bytes on a 64-bit system, so a slice header is never more than 24 bytes, no matter what the underlying array holds or how big it is.

We often passed a pointer to a slice to a function, under the misapprehension that we were saving memory.

t := getTickets() // e.g. returns []Tickets, a slice  
ft := filterTickets(&t)

func filterTickets(t *[]Tickets) []Tickets {}  

If we had a lot of data in t, we thought we had just prevented a large copy of data in memory by passing a pointer to filterTickets. Understanding slices as we do now, we can happily pass that slice by value without memory concerns.

t := getTickets() // []Tickets massive list of tickets, 20MB  
ft := filterTickets(t)

func filterTickets(t []Tickets) []Tickets {} // 24 bytes passed by value  

Of course, passing the slice by value also means the function cannot mistakenly reassign the caller's slice. Bear in mind, though, that the copied header still points at the same underlying array, so the callee can still modify the elements themselves.

6. Naked returns hurt readability and make your code harder to understand (in larger functions)

"Naked returns" is the name given in Go to returning from a function without explicitly stating what you are returning. Huh? In Go, you can have named return values e.g. func add(a, b int) (total int) {}. I can return from that function using just return instead of return total. Naked Returns can be useful and neat for small functions.

func findTickets() (tickets []Ticket, countActive int64, err error) {  
    tickets, countActive = db.GetTickets()
    if len(tickets) == 0 {
        err = errors.New("no tickets found!")
    }
    return
}

It's pretty clear what is happening here. If no tickets are found, then nil, 0, error is returned. If tickets are found, then something like a slice of 120 tickets, 80, nil is returned, depending on the ticket count. The key point is that if you have named return values in the signature, you can just use return (a naked return), and it will return each named value in whatever state it is in when return is called.

However...we had (have...) some big functions. Too big. Like, stupid-big. Naked returns in a function long enough that you have to scroll through it are a disaster for readability and a source of subtle bugs, especially if there are multiple return points. Don't do it. Don't do either, actually :) naked returns or big functions. Here is a made-up example:

func findTickets() (tickets []Ticket, countActive int64, err error) {  
    tickets, countActive = db.GetTickets()
    if len(tickets) == 0 {
        err = errors.New("no tickets found!")
    } else {
        tickets = append(tickets, addClosed()...)
        // return, hmmm...okay, I might know what this is
        return 
    }
    .
    .
    .
    // lots more code
    .
    .
    .
    if countActive > 0 {
        countActive -= closedToday()
        // have to scroll back up now just to be sure...
        return
    }
    .
    .
    .
    // Okay, by now I definitely can't remember what I was returning or what values they might have
    return
}

7. Be careful about scope and short-hand declaration

You can introduce subtle bugs in Go due to scoping when you declare variables with the same name using the shorthand := in different blocks - this is called shadowing.

func findTickets() (tickets []Ticket, countActive int64) {  
    tickets, countActive = db.GetTickets() // 10 tickets returned, 3 active
    if countActive > 0 {
        // oops, tickets redeclared and used just in this block
        tickets, err := removeClosed() // 6 tickets left after removing closed
        if err != nil {
            // Argh! We used the variables here for logging!, if we didn't we would
            // have received a compile-time error at least for unused variables.
            log.Printf("could not remove closed %s, ticket count %d", err.Error(), len(tickets))
        }
    }
    return // this will return 10 tickets o_O
}

The trick there is the := shorthand declare-and-assign. Normally := will only compile if you are declaring new variables on the left-hand side, but it also works if just one of the variables on the left is new. In our case above, err is new, so you might expect tickets to simply be overwritten, as it was already declared as a named return value. But that is not what happens, because of block scope - a new tickets variable is declared and assigned, and it goes out of scope once the block finishes. To fix this, declare err outside the block and use = instead of :=. A good editor (e.g. Emacs or Sublime with Go linting plugins) will pick up on this shadowing.

func findTickets() (tickets []Ticket, countActive int64) {  
    var err error
    tickets, countActive = db.GetTickets() // 10 tickets returned, 3 active
    if countActive > 0 {
        tickets, err = removeClosed() // 6 tickets left after removing closed
        if err != nil {
            log.Printf("could not remove closed %s, ticket count %d", err.Error(), len(tickets))
        }
    }
    return // this will return 6 tickets
}

8. Maps and random crashes

Maps are not safe for concurrent access. We had one situation where a map was available for the lifetime of the app as a package-level variable. This map was used for collecting stats for each controller in our app, and of course in Go each HTTP request is its own goroutine. You can see where this is going - eventually, different goroutines would attempt to access the map at the same time, be it for a read or a write. This would cause a panic and crash our app (we use upstart scripts on Ubuntu to respawn the app when the process stops, at least keeping it "up", so to speak). It appeared random to us, which is always fun. Finding the cause of panics like this was also a little more cumbersome pre-1.6, as the stack dump included all running goroutines, which amounted to a lot of logs to sift through.

The Go team did consider making maps safe for concurrent access, but decided against it, as it would add unnecessary overhead in most common cases - a pragmatic approach that keeps things simple. From the golang.org FAQ:

After long discussion it was decided that the typical use of maps did not require safe access from multiple goroutines, and in those cases where it did, the map was probably part of some larger data structure or computation that was already synchronized. Therefore requiring that all map operations grab a mutex would slow down most programs and add safety to few. This was not an easy decision, however, since it means uncontrolled map access can crash the program

Our code looked something like this

package stats

var Requests map[*revel.Controller]*RequestLog  
var RequestLogs map[string]*PathLog  

And we changed it to use the sync package from the stdlib to embed a reader/writer mutex lock in a struct that also wrapped up our map. We added some helper Add and Get methods to this struct.

var Requests ConcurrentRequestLogMap

// init is run for each package when the app first runs
func init() {  
    Requests = ConcurrentRequestLogMap{items: make(map[interface{}]*RequestLog)}
}

type ConcurrentRequestLogMap struct {  
    sync.RWMutex // We embed the sync primitive, a reader/writer Mutex
    items map[interface{}]*RequestLog
}

func (m *ConcurrentRequestLogMap) Add(k interface{}, v *RequestLog) {  
    m.Lock() // Here we can take a write lock
    m.items[k] = v
    m.Unlock()
}

func (m *ConcurrentRequestLogMap) Get(k interface{}) (*RequestLog, bool) {  
    m.RLock() // And here we can take a read lock
    v, ok := m.items[k]
    m.RUnlock()

    return v, ok
}

No more crashes.

9. Vendor...by the beard of Zeus, vendor

Okay, this is hard to admit. We're caught red-handed, guilty as charged...whoops. We deployed code to production without vendoring.

So, just to give you an idea of why that is bad, in case you didn't know: in Go, you get your dependencies by running go get ./... from the root of your project, which pulls HEAD on master for each one. Obviously this is very bad - unless you keep the exact versions of dependencies on your servers in your $GOPATH and never update them (and never rebuild or launch a new server), breaking changes are inevitable and you lose control over what code you are running in production. In Go 1.4 we vendored using Godeps and its GOPATH trick. In 1.5, we used the GO15VENDOREXPERIMENT environment variable. In 1.6, thankfully, finally, /vendor at the root of your project is recognised as the place to put your dependencies, no tools necessary. You can still use one of the various vendoring tools to track versions and make it easier to add or update dependencies (removing .git, updating the manifest, etc.).

Plenty learned, more to come

That is a small list of some of the basic mistakes we made early on, and what we learned from them. We are just a small team of 5 developers building Teamwork Desk, and yet we have learned an incredible amount about Go over the last year while shipping a huge amount of great features at a breakneck pace. You will see us attending various Go conferences this year, including GopherCon in Denver, and I will soon be talking about using Go at a local developer meet-up in Cork. We will continue to release useful open-source tools in Go and contribute back to existing libraries. We have a modest offering of some small projects so far (listed below), and we have also had PRs accepted back into Stripe, Revel, and several other open-source Go projects.
