Copilot Experience So Far


What my experience with GitHub Copilot has been so far.

After a month

I have been using copilot.vim for a month. I was skeptical to start: would it be a worthwhile productivity tool for me, or would it just help me write bugs faster?

After a month, I think it's a net positive. It's a valuable tool that I'm willing to let my employer pay $10/month for, lol.

We'll need some more time to discover how many bugs I've created.

A little slow

This thing's just not as fast as an onboard language server. But it's also doing more work. Sometimes it completes large swaths of code. It doesn't just look for keywords within context.

Good at patterns

If I write a block of code and then start into another block that might follow the same pattern, it can infer and recommend the next iteration on that pattern. This seems especially true with tests, which can be quite repetitive. For example, I had written a previous test suite with "empty", "unmatched" and "matched" cases. Then I started a new describe block, and it recommended the same pattern of 3 cases, filled in with reasonable test stubs:

describe('#formatSameAsUrls', () => {
  test('empty externalIds', () => {
    /* ... */
  })

  test('externalIds with unmatched sources', () => {
    /* ... */
  })

  test('externalIds with matched sources', () => {
    /* ... */
  })
})

It's not just code patterns that it knows. It's arbitrary patterns. For example, I edited a file where I had bookmarks saved. It knew how to format a new line, even generating the metadata for title and tags:

https://github.com/abi/screenshot-to-code "~Screenshot to Code - AI to generate code from screenshot" ai tools

Welcome back, del.icio.us.

Pulls in outside code

Of course, it's trained on gobs of code. Without any other fetch examples in the codebase, it completed a function body that called fetch and dealt with the JSON response.

async function fetchTags() {
  const res = await fetch("https://cataas.com/api/tags");
  const tags = await res.json();
  return tags;
}

Where'd this come from? The model has apparently seen the word "fetch" a lot. Go 10 million JavaScript developers!

Learns from adjacent code

I was wondering if it just pulls the most popular code out of GitHub. At first, it started giving me code completions with a certain toBe matcher:

  expect(something()).toBe(true)

But I'm not cute. I just use toEqual. Let's keep it real. I corrected it and kept going. From then on, it started suggesting toEqual instead of toBe.
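
Now the ghost text comes out like this (a made-up line, just to show the shape):

  expect(something()).toEqual(true)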

Connects English to code

I was using vim to take notes in an interview I was conducting. I wanted to record a web search that the candidate had made. I started writing the name of a class at the end of an English sentence, and Copilot knew from the description in English what the code should be:

Looking up syntax on bootstrap utility class for justify-content-center

Interrupts my thoughts

Sometimes, especially when writing an English sentence, I stop to consider my words and craft the sentence. (And sometimes I just keep typing without thinking!) Anyway, I said Copilot is slow, but sometimes I'm slower. It recommends a completion to my sentence and the next two, to boot. I see the ghost text and consider it.

In English, often this text is not what I want, and it feels like a distraction, almost like a notification that has come up and splintered my thoughts. Or like a cute little daughter who comes in to inquire of me, "Guess what's behind my back" (happening now -- the answer was "nothing").

Can see potential usages

At one point, I extracted a function, and I was creating a new interface for arguments to that function. As I started writing it, Copilot detected (I assume) which arguments were used inside the component, and I got completions for fields in the interface.

interface ComponentProps {
  // Copilot completed this line, then the next
  slug: string
}

Works in natural order

Let's say I wanted this line of code:

const { t } = useTranslation()

If I were using a language server and wanted a completion, I'd have to start typing useTr... to get a completion on the function name. Then I could move inside the destructuring curlies and type t (bad example, because it's one letter) to get an autocomplete on the t field of the return value.

But in Copilot, I can just start typing const { t and it will complete the whole line, based on patterns in text.

Recognizes well-known problems

I was doing the "fizzbuzz" problem for a testing training session. I typed the function name, and the rest was filled in, clause by clause, in 3 completions:

export function fizzbuzz(num) {
  return num % 15 === 0
    ? "fizzbuzz"
    : num % 3 === 0
    ? "fizz"
    : num % 5 === 0
    ? "buzz"
    : num;
}

Obviously, this problem has been done before. It even knew how to generate all test cases in 4 completions.
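
The cases it produced went something like this (my sketch of the shape, not the exact completions; the import path is assumed):

import { fizzbuzz } from "./fizzbuzz"; // assumed file name

describe("fizzbuzz", () => {
  test("returns the number for plain numbers", () => {
    expect(fizzbuzz(1)).toEqual(1);
  });

  test("returns fizz for multiples of 3", () => {
    expect(fizzbuzz(3)).toEqual("fizz");
  });

  test("returns buzz for multiples of 5", () => {
    expect(fizzbuzz(5)).toEqual("buzz");
  });

  test("returns fizzbuzz for multiples of 15", () => {
    expect(fizzbuzz(15)).toEqual("fizzbuzz");
  });
});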

There is also a Copilot chat interface that can be used from VS Code (unfortunately, no copilot.vim interface yet). I asked it, "What would be a good kata after the fizzbuzz exercise?", and it suggested the Roman numeral kata. Then it generated the solution and test suite, including several refactorings, given prompts like "Rewrite the test suite using the test.each syntax". Amazing.
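
To give a flavor of that last refactoring, a test.each version of a Roman numeral suite might look roughly like this (toRoman is my placeholder name, not necessarily what Copilot produced):

// Sketch of a test.each-style Jest suite for the Roman numeral kata.
// toRoman and the import path are placeholders.
import { toRoman } from "./roman";

test.each([
  [1, "I"],
  [4, "IV"],
  [9, "IX"],
  [14, "XIV"],
  [1990, "MCMXC"],
])("toRoman(%i) returns %s", (num, expected) => {
  expect(toRoman(num)).toEqual(expected);
});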

Prevents some learning

Well, the Roman numeral kata taught me that Copilot can do stuff. But I didn't do stuff -- I never completed the kata. Luckily, I had already done that kata and could consider the recommendation and verify the answer.

I felt like I had missed out again when doing an Exercism exercise while learning Elixir. Copilot gave me a lot of help -- too much. It was too fast. Maybe it helped with discoverability, the way an IDE can help discover language constructs, but it definitely hurt my learning. My fingers never got the chance to learn. My mind wasn't exercised enough.

defmodule Rules do
  def eat_ghost?(power_pellet_active?, touching_ghost?) do
    power_pellet_active? and touching_ghost?
  end

  def score?(touching_power_pellet?, touching_dot?) do
    touching_power_pellet? and touching_dot?
  end

  def lose?(power_pellet_active?, touching_ghost?) do
    power_pellet_active? and touching_ghost?
  end

  def win?(has_eaten_all_dots?, power_pellet_active?, touching_ghost?) do
    has_eaten_all_dots? and power_pellet_active? and touching_ghost?
  end
end

Well, there was a bit of an exercise thanks to Copilot: it recommended incorrect function bodies in a couple of instances, so I got to hunt for bugs too.

Finally

And now, like all good eslint users, I can disable rules as I go. And I don't have to remember the names of all the rules. As soon as I start typing a comment, it knows!

/* eslint-disable-next-line no-console */

There will surely be more months and things to experience. Mostly, I've been impressed. Is Copilot impressed with me? When do androids discover feelings?