Intro & Motivation
Participating in Codegeist every year is a tradition our team holds in high esteem, and we deeply admire Atlassian for keeping it alive. This year was no different, and we were super excited to hear about this year’s theme and get to work.
Initially, we had planned to create a Jira or Confluence application, and we brainstormed half a dozen ideas, including a support bot and an AI assistant for Confluence tables, among others. It took us at least ten days and a hundred cups of coffee to decide and agree on what exactly we would do and how.
However, when the Bitbucket Cloud team announced support for Forge, we abandoned those ideas and immediately shifted our focus to it. With our extensive experience in developing Atlassian Connect applications for Bitbucket Cloud, we were eager to explore the possibilities that Forge for Bitbucket had to offer.
The Problem
As the developers of Awesome Graphs for Bitbucket, our team understands the importance of providing valuable insights to not just developers, but also development managers and directors. These individuals often visit Bitbucket to gather information about their team's activities over the past month. It is common for bosses to request this information via email, and development managers may use it for monthly synchronizations or planning.
The process of collecting this information, however, can be laborious and time-consuming. It requires opening a list of commits and pull requests, scrolling through a large amount of data, and aggregating it while ensuring that nothing important is missed. To address this challenge, we have developed a solution that leverages the power of Generative AI to automate the process.
The Solution
Team Pulse for Bitbucket is an AI-powered application that analyzes commits and pull requests and translates technical data into an accessible and comprehensive overview of the team's monthly progress in human-readable form.
Our solution streamlines the information-gathering process, freeing up valuable time for development teams and their managers to focus on other important tasks. With our solution, you can be confident that you are receiving accurate and relevant data that can help you make informed decisions about your team's activities.
Activity page
In practical terms, the application adds a dedicated Activity page on the repository level, which displays an AI-generated executive summary. The summary provides a comprehensive breakdown of all the activities carried out in the last month, categorized according to the type of activity, such as feature implementation, code refactoring, bug fixing, and more. The categories will depend on the actual work done by your team.
Monthly activity widget
The application also adds a widget to a repository page that provides repository activity metrics. This widget displays the number of open, merged, and declined pull requests, along with the total number of commits for the last 30 days. Additionally, it shows the percentage changes relative to the previous period, making it easy to track the progress.
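As a rough illustration, the period-over-period change shown in the widget boils down to a small calculation. The helper below is a sketch of that idea; the function name, rounding, and null-for-empty-period behavior are our own assumptions, not taken from the app's actual code:

```typescript
// Percentage change of the current 30-day count relative to the
// previous 30-day period, rounded to one decimal place.
// Returns null when the previous period had no activity, so a UI
// could render "n/a" instead of dividing by zero.
function percentChange(current: number, previous: number): number | null {
  if (previous === 0) return null;
  return Math.round(((current - previous) / previous) * 1000) / 10;
}

// Example: 45 merged pull requests this period vs 36 in the previous one.
console.log(percentChange(45, 36)); // 25
```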
This app is designed to promote transparency in team work and ensure that development practices are optimized for maximum efficiency and effectiveness. With the help of the app, development managers can streamline their workflow, monitor progress, and proactively address areas for improvement. It is an essential tool for managers seeking to maintain a competitive edge in the rapidly evolving field of software development.
Challenges we’ve encountered
Although the solution turned out to be relatively straightforward, we encountered many obstacles along the way.
The first challenge was choosing the right LLM provider for text and code generation. We evaluated two providers, OpenAI and Anthropic, and took a closer look at their respective GPT-4 and Claude 2 models.
Both tools offer excellent support for generating text and code, which can be incredibly useful for completing a variety of daily tasks, such as creating emails, writing essays, and more. However, there were some key differences between the two models that we needed to consider before making a decision.
For instance, while the GPT-4 model has slightly better performance in areas such as math, reasoning, and coding skills, it can only generate output in a limited number of languages. In contrast, Claude 2 produces safer output and is more affordable, which is a significant factor to consider given the inability to sell Forge apps for Bitbucket at the moment.
Additionally, Claude 2 has larger context windows, which allowed us to input up to 100k tokens, making it an excellent choice for completing text and code-based tasks.
Overall, after careful consideration, we chose OpenAI's GPT-4 over Claude 2, as it performed better at aggregating data from Bitbucket during our testing.
The second challenge was determining the most effective basis for generating the content of the report. We considered several options, including using commit messages, pull request names, pull request contents, or changes in the code itself.
Although changes to the code would have been a great option, we decided against it due to concerns about IP protection. We understand that businesses are still concerned (and so are we) about their IP being compromised, and we did not want to put their code at risk.
We also considered aggregating the content of pull requests as an option, which would have provided a lot of context for the AI to understand the change. However, we encountered a significant limitation in that GPT-4 can accept only up to 8k tokens as input. Unfortunately, the description of pull requests is usually very voluminous and would have exceeded this input limit. Therefore, we declined this option, even though it was good from a quality point of view.
Ultimately, we settled on generating content based on pull request names. This option gave us similar results to the previous option but bypassed the input size limitation. Pull request names are typically short and to the point, making them easier to work with, and we were still able to extract valuable information from them.
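The approach above amounts to packing pull request titles into a prompt while staying under the model's input limit. The sketch below illustrates the idea; the character-per-token heuristic, the function names, and the header text are our own assumptions (a real implementation would use the provider's tokenizer):

```typescript
// Rough token estimate (~4 characters per token for English text).
// This heuristic is our own approximation, not OpenAI's tokenizer.
const estimateTokens = (text: string): number => Math.ceil(text.length / 4);

// Pack as many pull request titles as fit into the prompt budget,
// leaving the rest out rather than exceeding the model's input limit.
function buildPrompt(prTitles: string[], budgetTokens: number): string {
  const header =
    "Summarize last month's team activity from these pull requests:\n";
  let used = estimateTokens(header);
  const lines: string[] = [];
  for (const title of prTitles) {
    const line = `- ${title}\n`;
    const cost = estimateTokens(line);
    if (used + cost > budgetTokens) break; // stop before overflowing
    lines.push(line);
    used += cost;
  }
  return header + lines.join("");
}
```

Because titles are short, hundreds of them fit comfortably into an 8k-token budget, which is what made this option workable for us.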
The third challenge was fitting our processing into the function execution limit. We were pleased to see that the time limit for executing requests had been increased from 25 to 55 seconds, which made this considerably less difficult.
Our team had been working on optimizing the process for some time and fine-tuning requests to the Bitbucket REST API. We worked on minimizing the amount of time it took to execute requests while ensuring that we were still able to obtain the desired results. Our efforts paid off, as we were able to come up with a solution that fit within the 55-second limit.
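One generic technique for this kind of optimization is fetching pages of a paginated REST API concurrently instead of sequentially. The sketch below is our own simplified illustration, not the app's actual code; the page-fetching function is injected so the example stays self-contained:

```typescript
// Fetch `pageCount` pages concurrently instead of one-by-one.
// `fetchPage` is a pluggable async function (e.g. one wrapping a
// paginated Bitbucket REST API endpoint); injecting it keeps this
// sketch free of network dependencies.
async function fetchAllPages<T>(
  pageCount: number,
  fetchPage: (page: number) => Promise<T[]>,
): Promise<T[]> {
  const pages = await Promise.all(
    Array.from({ length: pageCount }, (_, i) => fetchPage(i + 1)),
  );
  return pages.flat(); // preserve page order in the combined result
}
```

With N pages in flight at once, total wall-clock time approaches the slowest single request rather than the sum of all of them, which is what matters under a hard execution deadline.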
Feedback on Forge
We appreciate the many capabilities that Forge for Bitbucket provides to developers and are eager to see its continued development.
Through our work with Forge for Bitbucket, we have identified several areas where the platform could potentially benefit from further growth and development. Here are a few examples:
- While we have the option to embed at the repository level, we are currently limited by the available modules (extension points) provided by Forge for Bitbucket. However, if we were able to integrate at the project and workspace levels, we could significantly enhance the application's capabilities. This would enable us to provide a more comprehensive overview for larger pull request scenarios, which would be highly beneficial.
- Our application currently features a link in the left-hand sidebar which allows users to navigate to the main application page. The link is generated using the following template:

  /{workspaceSlug}/{repositorySlug}/forge/{forgeAppId}/{forgeAppModuleKey}

  To retrieve the required {workspaceSlug} and {repositorySlug}, we need to make a request to the Bitbucket API on behalf of the user, specifically:

  /2.0/repositories/${workspaceId}/${repositoryId}?fields=full_name

  However, this request could have been avoided if the path to the repository had been present in the context from the start. Furthermore, we have observed that generating the link using identifiers instead of slugs results in the side menu item not being displayed by Bitbucket.
- The Repository overview card module currently lacks an icon property and instead uses a standard light bulb icon. This can be inconvenient if multiple Forge apps that use this module are installed in Bitbucket.
- The Repository main menu page module is displayed without any icon and likewise has no property to set one. This creates the impression that something is not functioning correctly, especially when compared to other menu items that do have icons. You can observe this problem in the module documentation.
- When calling bridge.invoke from Custom UI to Forge, the function receives a context without the workspaceId property, although this identifier is used in many of the Bitbucket APIs. To make requests to the API, we derive the workspace identifier from the context.installContext property by removing the ari:cloud:bitbucket::workspace/ prefix from its value. It would be beneficial to have workspaceId in the context directly.
- Currently, it is not possible to obtain a link for installing the Forge application. The application can only be installed using the forge install command. However, if a link could be generated for Bitbucket in the developer console, it would greatly simplify the distribution of the application.
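The installContext workaround mentioned above is a simple string transformation. Here is a sketch of it; the function name and the error handling for unexpected values are our own additions, and the UUID in the example is hypothetical:

```typescript
// Derive the workspace identifier from the install context ARI,
// since `workspaceId` is not provided directly in the Custom UI
// context received via bridge.invoke.
const WORKSPACE_ARI_PREFIX = "ari:cloud:bitbucket::workspace/";

function workspaceIdFromInstallContext(installContext: string): string {
  if (!installContext.startsWith(WORKSPACE_ARI_PREFIX)) {
    throw new Error(`Unexpected install context: ${installContext}`);
  }
  return installContext.slice(WORKSPACE_ARI_PREFIX.length);
}

// Example with a hypothetical workspace UUID:
// workspaceIdFromInstallContext("ari:cloud:bitbucket::workspace/{1a2b3c}")
// returns "{1a2b3c}"
```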
Instead of Conclusion
We would like to extend our sincere appreciation to Atlassian for providing us with the chance to once again test our skills and demonstrate our capabilities.
And by the way, have you had a chance to watch our app's promo video at the top of this page? If not, check it out and tell us what you think in the comments. We would be grateful for your support and vote. Thank you!
Built With
- chatgpt
- forge
- react
- typescript
- webpack