GitHub today announced the general availability of Copilot Enterprise, a $39 per user per month version of its code completion tool and developer-centric chatbot for large enterprises. Copilot Enterprise includes all the features of the existing Business plan, including IP indemnity, but expands on it with a number of features important to large teams. The highlight is the ability to query your organization's internal code and knowledge base. Copilot Enterprise also integrates with Microsoft's Bing search engine (currently in beta), and soon users will be able to fine-tune Copilot's models on their team's existing codebase.
This allows a new developer on your team, for example, to ask Copilot how to deploy container images to the cloud and get answers specific to your organization's processes. After all, for many developers, the barrier to productivity when joining a new company isn't necessarily understanding the codebase but understanding the company's various processes. That said, Copilot clearly helps with code understanding as well.
Many teams now store their documentation in GitHub repositories, so it's relatively easy for Copilot to reason over it. In fact, as GitHub CEO Thomas Dohmke told me, GitHub itself stores virtually all of its internal documentation on the service and recently gave all employees access to these new features. As a result, some people have started using Copilot for non-engineering questions; Dohmke himself, for example, has started asking it about vacation policies.
According to Dohmke, customers have been asking for the ability to bring internal information into Copilot since its early days. “A lot of what developers do in-house is different from what they do at home or in open source, in the sense that the organization has processes and specific libraries it uses, and many of them have internal tools, systems, and dependencies that don’t exist like that on the outside,” he pointed out.
Regarding the Bing integration, Dohmke said it's useful for asking Copilot about things that may have changed since the model was originally trained (think open source libraries and APIs). For now, the feature is only available in the Enterprise edition, and while Dohmke wouldn't say much about whether it will come to other editions, I wouldn't be surprised if GitHub eventually introduced it to the other tiers as well.
One feature that is likely to remain enterprise-only due to the associated costs is fine-tuning, which is expected to launch soon. “We let a company choose a set of repositories within its GitHub organization and fine-tune the model on those repositories,” he explained. “By abstracting away the complexity of generative AI fine-tuning from our customers, we enable them to leverage their codebase to generate an optimized model that can then be used within Copilot scenarios.” He noted that fine-tuned models cannot stay as up to date as embeddings, skills, and agents (such as the new Bing agent), but he argues that these approaches are all complementary, and customers who have already tested the feature have seen significant improvements. This is especially true for teams whose codebases use languages less widespread than Python or JavaScript, or internal libraries that don't really exist outside the organization.
In addition to talking about today's release, I also asked Dohmke for some high-level thoughts on where Copilot goes next. The answer is, in essence, more Copilot in more places. “I think next year we'll see more and more focus on the end-to-end experience of bringing Copilot to where you're already working, rather than creating a new destination and copying and pasting things there,” he said. “That's why I'm so excited about the opportunity that comes with making Copilot available on github.com.”
Regarding the underlying technology and its future evolution, Dohmke said the autocomplete functionality currently runs on GPT-3.5 Turbo. GitHub has not migrated this model to GPT-4 because of latency requirements, but Dohmke said the team has updated the model “more than six times” since Copilot Business launched.
At this point, it doesn't appear that GitHub will follow Google's model of differentiating price tiers based on the size of the model powering the experience. “Different use cases require different models, and the different optimizations in each model version (latency, accuracy, quality of results, responsible AI) play a big role in ensuring the output is ethical, compliant, and safe, and that the code is of the highest quality rather than lower quality than the customer expects,” Dohmke said. “We will continue on the path of using the best models for different parts of the Copilot experience.”