If I were to make my own Go…

  • … I would call it “Go further”
  • … it would have exceptions (or encourage the use of panic like exceptions).
  • … named return values would not declare a variable.
  • … it would have a ternary operator.
  • … it would have Math.round().
  • … if a variable already exists in the scope, := would assign to it instead of shadowing it.
  • … it would be an error to call a function that returns something without assigning the results.
  • … there would be non-blocking channel operations.
  • … it wouldn’t have the need for absolute import paths in the main package.
  • … copying a variable of a reference type (slice, map) wouldn’t be allowed.

 

Update:

I had written this blog post a while ago, before I found this call for Experience Reports. Below I will elaborate a bit on the items above to make this a suitable submission.

 

… I would call it “Go further”

Just kidding. :)

 

… it would have exceptions (or encourage the use of panic like exceptions)
This will be the longest part of this article. It gives several reasons why Go would be much better with exceptions.

 
1. Error handling obscures business logic

Look at this code, taken directly from an application of mine, just with a few features removed:

func() error {
  var b bytes.Buffer
  var n int

  written := 0
  
  // Start the first file
  chunkNum := 1
  w := startNewFile(chunkNum)

  // Write header
  n = w.Write([]byte("[\n"))
  written += n

  for d := range documents {

    b.Reset()

    b.WriteString(`{ timestamp: `)
    b.WriteString(strconv.Itoa(d.Timestamp))
    b.WriteString(`, tags: `)
    b.Write([]byte(d.Tags))
    b.WriteString(`, frequencies: [`)
    
    for i, v := range d.Histogram {
      if i != 0 {
        b.WriteString(",")
      }

      b.WriteString(`{value: `)
      b.WriteString(strconv.Itoa(v.Value))
      b.WriteString(`, frequency: `)
      b.WriteString(strconv.Itoa(v.Frequency))
      b.WriteString(`}`)
    }

    b.WriteString(`] }`)
    b.WriteString("\n")

    n = w.Write(b.Bytes())
    written += n

    // Switch to next file?
    if written > chunkSize {

      n = w.Write([]byte("]\n"))

      w.Close()

      chunkNum++
      w = startNewFile(chunkNum)

      written = 0

      n = w.Write([]byte("[\n"))
      written += n
    }

    // Terminate early?
    if contextIsCanceled(ctx) {
      return ctx.Err()
    }
  }

  // End last file
  n = w.Write([]byte("]\n"))
  
  w.Close()

  return nil
}

If you stare at that code for half a minute or so, you’ll realize what it does: it receives a bunch of documents on the channel documents and writes them in a certain format into files, while making sure all files are approximately chunkSize in size. The output will look something like this, but that’s not important right now:
[
{timestamp: 1234, tags: something, frequencies: [ {value: 1, frequency: 3},{value: 2, frequency: 14} ]}
{timestamp: 1235, tags: something, frequencies: [ {value: 1, frequency: 7},{value: 2, frequency: 4} ]}
]

The fact that it takes me just half a minute to understand the code matters to me. If a client calls me on the phone and asks “My files are larger than chunkSize, why!?”, I can give them an answer right away. Every piece of code is written just once, but read many times.

Now, the code shown above is unfortunately not valid Go code: the error handling is completely missing. In a language with exceptions, the whole thing would merely have to be wrapped in a try/catch block. Let’s look at the code with Go-style error handling instead:

func() error {
  var b bytes.Buffer
  var n int
  var err error

  written := 0

  // Start the first file
  chunkNum := 1
  w, err := startNewFile(chunkNum)
  if err != nil {
    return err
  }

  // Write header
  n, err = w.Write([]byte("[\n"))
  if err != nil {
    return err
  }
  written += n

  for d := range documents {

    b.Reset()

    _, err = b.WriteString(`{ timestamp: `)
    if err != nil {
      return err
    }
    _, err = b.WriteString(strconv.Itoa(d.Timestamp))
    if err != nil {
      return err
    }
    _, err = b.WriteString(`, tags: `)
    if err != nil {
      return err
    }
    _, err = b.Write([]byte(d.Tags))
    if err != nil {
      return err
    }
    _, err = b.WriteString(`, frequencies: [`)
    if err != nil {
      return err
    }

    for i, v := range d.Histogram {
      if i != 0 {
        _, err = b.WriteString(",")
        if err != nil {
          return err
        }
      }

      _, err = b.WriteString(`{value: `)
      if err != nil {
        return err
      }
      _, err = b.WriteString(strconv.Itoa(v.Value))
      if err != nil {
        return err
      }
      _, err = b.WriteString(`, frequency: `)
      if err != nil {
        return err
      }
      _, err = b.WriteString(strconv.Itoa(v.Frequency))
      if err != nil {
        return err
      }
      _, err = b.WriteString(`}`)
      if err != nil {
        return err
      }
    }

    _, err = b.WriteString(`] }`)
    if err != nil {
      return err
    }

    n, err = w.Write(b.Bytes())
    if err != nil {
      return err
    }
    written += n

    // Switch to next file?
    if written > chunkSize {
      n, err = w.Write([]byte("]\n"))
      if err != nil {
        return err
      }

      err = w.Close()
      if err != nil {
        return err
      }

      chunkNum++
      w, err = startNewFile(chunkNum)
      if err != nil {
        return err
      }

      written = 0

      n, err = w.Write([]byte("[\n"))
      if err != nil {
        return err
      }
      written += n
    }

    // Terminate early?
    if contextIsCanceled(ctx) {
      return ctx.Err()
    }
  }

  // End last file
  n, err = w.Write([]byte("]\n"))
  if err != nil {
    return err
  }

  err = w.Close()
  if err != nil {
    return err
  }

  return nil
}

I wasn’t going to put up with this, which led to this awkward home-grown exception handling:

func() (err error) {
	defer func() {
		if e := recover(); e != nil {
			err = e.(error)
		}
	}()

	var b bytes.Buffer
	var n int

	written := 0

	// Start the first file
	chunkNum := 1
	w, err := startNewFile(chunkNum)
	panicOnError(err)

	// Write header
	n, err = w.Write([]byte("[\n"))
	panicOnError(err)
	written += n

	for d := range documents {

		b.Reset()

		writeStringOrPanic(&b, `{ timestamp: `)
		writeStringOrPanic(&b, strconv.Itoa(d.Timestamp))
		writeStringOrPanic(&b, `, tags: `)
		writeOrPanic(&b, []byte(d.Tags))
		writeStringOrPanic(&b, `, frequencies: [`)

		for i, v := range d.Histogram {
			if i != 0 {
				writeStringOrPanic(&b, ",")
			}

			writeStringOrPanic(&b, `{value: `)
			writeStringOrPanic(&b, strconv.Itoa(v.Value))
			writeStringOrPanic(&b, `, frequency: `)
			writeStringOrPanic(&b, strconv.Itoa(v.Frequency))
			writeStringOrPanic(&b, `}`)
		}

		writeStringOrPanic(&b, `] }`)

		n, err = w.Write(b.Bytes())
		panicOnError(err)
		written += n

		// Switch to next file?
		if written > chunkSize {
			n, err = w.Write([]byte("]\n"))
			panicOnError(err)

			err = w.Close()
			panicOnError(err)

			chunkNum++
			w, err = startNewFile(chunkNum)
			panicOnError(err)

			written = 0

			n, err = w.Write([]byte("[\n"))
			panicOnError(err)
			written += n
		}

		// Terminate early?
		if contextIsCanceled(ctx) {
			err = ctx.Err()
			return
		}
	}

	// End last file
	n, err = w.Write([]byte("]\n"))
	panicOnError(err)

	err = w.Close()
	panicOnError(err)

	return
}

This example is by no means an exception (no pun intended), quite the contrary: Go is a language well suited for tooling (converters, importers, exporters, request translators, etc.), and in all those applications you will find code similar to this example.

I have a Gist that compares go-selenium with tebeka/selenium and sclevine/agouti. Note how go-selenium uses t.Fatalf to make the test concise and much easier to read.

 
2. Usually, within one function or code block, the only possible course of action is to abort and return

One common argument for Go’s style of error handling is that it forces the developer to think about proper error handling, i.e. the correct way to handle a specific error.
This might be true for errors like the one from strconv.Atoi, where an error is really just a value. In all other cases an error is caused either by wrong user input or by the hardware or application being in a wrong state, as in the example above. Pretty much the only error that can happen there is an I/O error because the hard disk is full. And in these cases there is usually only one correct action: abort the whole request/process, return several levels up the call stack and output an error message.

 
3. Error passing code is hard to write and hard to test

All those additional lines of code that are necessary to pass the error all the way up to the first caller can contain additional bugs (this has happened to me a lot) and are really hard to test (try provoking an I/O error in a test).

 
4. You have to predict the future (or build a bad interface)

Look at net/http/cookiejar. cookiejar.New() returns (*Jar, error). Why can it return an error? It never actually returns one, so I guess someone included an error value just in case future code can generate errors. So now I have to wrap it in

func mustMakeCookiejar() *cookiejar.Jar {
	jar, err := cookiejar.New(&cookiejar.Options{PublicSuffixList: publicsuffix.List})
	if err != nil {
		panic(err)
	}
	return jar
}

to be able to use it in c := http.Client{Jar: mustMakeCookiejar()}.

5. It clutters otherwise chainable function calls

Last but not least, it bloats code in which several function calls would otherwise be chainable (and they often are):

try {
	result := strconv.Atoi(decodeJson(j).(map[string]item)["Items"][0].Value) + 2
} catch(err error) {
	printStackTrace(err)
}

vs.

	decoded, err := decodeJson(j)
	if err != nil {
		return err
	}
	m, ok := decoded.(map[string]item)
	if !ok {
		return errors.New("Decoded JSON was not an object, this should never happen")
	}
	items := m["Items"]
	if items == nil {
		return errors.New("Items was not an array, this should never happen")
	}
	if len(items) == 0 {
		return errors.New("Item missing")
	}
	result, err := strconv.Atoi(items[0].Value)
	if err != nil {
		return err
	}
	result += 2

 

… named return values would not declare a variable.

Multiple return values are super useful. Naked returns, not so much. Unfortunately, documenting the return values by naming them takes you one step closer to a naked return.

func createImages() (*image.RGBA, *image.RGBA, string, error) {
}

Huh?

func createImages() (leftImage *image.RGBA, rightImage *image.RGBA, stats string, err error) {
}

Aha, now it’s clear what each return value is. But now you also have four new variables in your scope. Should you use them? And then do a naked return? That obscures the data flow. But not using them is kind of a waste… aaah.
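The compromise I tend to land on is keeping the names purely for documentation and still returning explicitly (the body of createImages here is a hypothetical stub; only the signature is from above):

```go
package main

import (
	"fmt"
	"image"
)

// The named results document the signature, but the body never
// assigns to them: local variables plus an explicit return keep
// the data flow visible.
func createImages() (leftImage *image.RGBA, rightImage *image.RGBA, stats string, err error) {
	l := image.NewRGBA(image.Rect(0, 0, 10, 10))
	r := image.NewRGBA(image.Rect(0, 0, 10, 10))
	return l, r, "2 images, 10x10", nil
}

func main() {
	left, right, stats, err := createImages()
	fmt.Println(left.Bounds(), right.Bounds(), stats, err)
}
```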

 

… it would have a ternary operator.

Really! Why not?

 

… it would have Math.round().

This one is obsolete now (math.Round was added in Go 1.10), yeah!

(Why does it return float64, though?)

 

… if a variable already exists in the scope, := would assign to it instead of shadowing it.

Consider this code:

var value int

if mode == "parse" {
	value, err := strconv.Atoi(s)
	if err != nil {
		return err
	}
}
if mode == "constant" {
	value = 1
}

In the line value, err := strconv.Atoi(s), two new variables are declared, local to the if block: value and err. For err this is what I want; it didn’t exist before, which is why I used :=. But I don’t need a new block-local value. If there were no if, the same line would not declare a new value variable.

In almost 100 % of cases the current behavior is not what I want. If a variable already exists in the scope, := should assign to it, no matter whether it was declared in the current scope or in an outer scope.
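With := behaving as it does, the workaround is to declare err up front and use plain assignment inside the block; here is the snippet above, restructured that way into a small runnable function:

```go
package main

import (
	"fmt"
	"strconv"
)

func parse(mode, s string) (int, error) {
	var value int
	var err error // declared up front, so plain = can be used below

	if mode == "parse" {
		value, err = strconv.Atoi(s) // assigns to the outer variables, no shadowing
		if err != nil {
			return 0, err
		}
	}
	if mode == "constant" {
		value = 1
	}
	return value, nil
}

func main() {
	fmt.Println(parse("parse", "42"))
	fmt.Println(parse("constant", ""))
}
```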

 

… it would be an error to call a function that returns something without assigning the results

If f() returns three values, why is it an error to call a, b = f(), but calling just f() is OK? The latter should be an error, too.

This leads to very subtle errors like the incredibly common

func() error {
    file, err := os.Create(filename)
    if err != nil {
        return err
    }
    defer file.Close()

    ...
}

Thank you very much, you have just discarded the error value of Close().

 

… there would be non-blocking channel operations.

Non-blocking channel reads and writes are surprisingly common, and currently each one requires four lines of code:

select {
case c <- v:
default:
}

It would be nice to have a shorthand.
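The same select/default pattern covers the receive side too; a small runnable demonstration of both directions:

```go
package main

import "fmt"

func main() {
	c := make(chan int, 1)

	// Non-blocking send: succeeds because the buffer has room.
	select {
	case c <- 42:
		fmt.Println("sent")
	default:
		fmt.Println("channel full")
	}

	// Non-blocking receive: gets the value without blocking.
	select {
	case v := <-c:
		fmt.Println("received", v)
	default:
		fmt.Println("channel empty")
	}

	// Second receive: the channel is now empty, default fires immediately.
	select {
	case v := <-c:
		fmt.Println("received", v)
	default:
		fmt.Println("channel empty")
	}
}
```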

 

… it wouldn’t have the need for absolute import paths in the main package.

For packages that are meant to be imported it might make sense, but when I’m writing an application and want to put it somewhere in my GOPATH (for example next to my other applications), and I want to put some of its files in a subpackage (to make some functions unexported), I have to write import "github.com/AndreKR/myapplication/subpackage" even though it’s not even hosted on GitHub! That feels so wrong!

 

… copying a variable of a reference type (slice, map) wouldn’t be allowed.

Now that I have accumulated some experience with Go, these things come naturally to me, but I remember (because I wrote it down, partly on Stack Overflow) that when I was new to Go, I was pretty confused by all the resources that claimed “there are no references in Go, there are no objects in Go, everything is copied and passed by value”, etc. And then the slice came along. And then the map.

For slices it wasn’t so bad. After all, there were this and this article explaining what they look like internally, so their behavior could be deduced from that. Unfortunately, maps still seem to be “waiting for future posts”. :)

I haven’t thought this through thoroughly (wow), but maybe it would be best if maps couldn’t be copied as part of a struct at all? Only pointers to maps?
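For illustration, here is the trap such a rule would forbid: the struct is copied by value, yet both copies silently share the same map:

```go
package main

import "fmt"

type histogram struct {
	counts map[string]int
}

func main() {
	a := histogram{counts: map[string]int{}}
	b := a // copies the struct header, but both now share one map

	b.counts["x"] = 1
	fmt.Println(a.counts["x"]) // 1: writing through the "copy" modified the original
}
```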