Software Development Audits
Due Diligence for Software Development


Machine Learning to Improve Software Development – Grip Public Beta Release

Almost There!

After nine months of heated discussions, late-night coding, morning mock-ups, weekend writing and whisky nights (@kmile does not drink beer…) we are proud to announce our public beta.

We started Grip with the idea of improving software development through data analysis, and now (finally) have a first version of our application live.

For a software application Grip can:

  • Automatically harvest and analyze live data from the application’s software development process, drawing on a limited number of sources (currently GitHub, JIRA, BitBucket, SonarQube and Pivotal Tracker)
  • Present our analysis of this data in web based dashboards
  • Set software development goals for a software application based on the data we collect
  • Make predictions on reaching these goals based on the data
  • Recommend what areas of the application’s software development process to improve in order to reach those goals
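To make data from several sources comparable, harvested records have to be mapped onto a common schema before analysis. A minimal sketch of that normalization step (the payload shapes and field names here are illustrative assumptions, not Grip’s actual connectors):

```python
def normalize_github_commit(raw):
    """Map a (simplified) GitHub commit payload to a common event schema."""
    return {
        "source": "github",
        "kind": "commit",
        "author": raw["commit"]["author"]["name"],
        "timestamp": raw["commit"]["author"]["date"],
    }

def normalize_jira_issue(raw):
    """Map a (simplified) JIRA issue payload to the same schema."""
    return {
        "source": "jira",
        "kind": "issue",
        "author": raw["fields"]["creator"]["displayName"],
        "timestamp": raw["fields"]["created"],
    }

# Once every source emits the same shape, downstream dashboards and models
# can treat all events uniformly.
raw_commit = {"commit": {"author": {"name": "Ada", "date": "2016-01-01T00:00:00Z"}}}
event = normalize_github_commit(raw_commit)
```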

How does Grip work?

We identified three categories of software development goals, based on the traditional project management triangle, against which progress can be scored.

These categories are:

Velocity – Represents the actual productivity of your development activities. It quantifies how much your team(s) can accomplish per unit of time.

User Satisfaction – Gives you a sense of how happy your users/clients are with your offering. It helps you understand whether user issues are being addressed.

Costs – As used here, costs describes a concept more closely aligned with cost efficiency. That is, are you generating the highest possible output from your development resources?
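The categories above can be turned into simple numeric scores. A minimal sketch, where the metric definitions (story points per sprint, points per unit of spend) are our own illustrative proxies, not Grip’s actual scoring:

```python
def sprint_velocity(points_per_sprint):
    """Average story points completed per sprint -- a simple Velocity proxy."""
    return sum(points_per_sprint) / len(points_per_sprint)

def cost_efficiency(points_delivered, total_cost):
    """Output per unit of spend -- a rough stand-in for the Costs goal."""
    return points_delivered / total_cost

# Example: a team completing 20, 24 and 22 points over three sprints
avg_velocity = sprint_velocity([20, 24, 22])
efficiency = cost_efficiency(points_delivered=66, total_cost=33)
```

Real scoring would of course weigh many more signals, but reducing each goal to a number is what makes goal-setting and prediction possible.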

The ability to reach these goals is dependent on what happens during development.

Deciding how best to reach these goals is typically based on the experience of the people creating the software; while valuable, this approach too often fails.

We set out to create a more data-driven approach, using live data generated during development to make predictions about the outcome: the software product in use.

So after a few conversations we came up with the following sketch:


The input variables are based on what happens in a software development process before your application is in production. We measure elements of the people who make it, the process by which it is made, the technologies used and the code the team produces. The output variables are based on the goals described above.

Besides collecting the data from a software development process, we also decided to use machine learning and simulations to uncover relationships between inputs and desirable outputs.
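To give a flavor of the inputs→outputs idea, here is a minimal sketch that fits a linear model relating process metrics to a goal score. The metrics, the synthetic data and the choice of plain least squares are all illustrative assumptions; our actual models differ:

```python
import numpy as np

# Hypothetical per-project input metrics: [test coverage, review coverage]
X = np.array([
    [0.5, 0.2],
    [0.7, 0.4],
    [0.9, 0.6],
    [0.6, 0.8],
])
# Observed goal scores (e.g. a velocity score); synthetic and exactly linear here
y = 2.0 * X[:, 0] + 3.0 * X[:, 1] + 1.0

# Fit a least-squares linear model: append a bias column, solve for coefficients
Xb = np.hstack([X, np.ones((X.shape[0], 1))])
coef, *_ = np.linalg.lstsq(Xb, y, rcond=None)

def predict(metrics):
    """Predict the goal score for a new project's input metrics."""
    return float(np.append(metrics, 1.0) @ coef)
```

With a fitted model in hand, recommendations follow naturally: vary one input at a time and see which change moves the predicted goal score the most.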

Our Approach: Simulation and Machine Learning for Software Development


Not there yet

We are very proud of what we have achieved, but we also know we are not there yet.

We still have very limited data, so our predictions and recommendations can be inaccurate; our models are rudimentary and we still need to define better goals. But we hope you like what we have so far.

One of the reasons we released our public beta is that we want feedback. We want to better understand which software development goals are important, and having more data will improve our analysis.

If you are interested, feel free to sign up for our beta and we will talk to you about connecting as soon as we can.

You can also have a look at our development through our own Grip here:

Things We’re Working On:

  • Improved UX
  • Better Goal Representation
  • More connected information sources
  • More capabilities for managing users of your Grip
  • Automated onboarding
  • And of course more accurate predictions and recommendations
Jan Princen