Trellis Score report
The Trellis Score Report calculates and displays the Trellis Score for each individual developer.
The Trellis Score is made up of several factors, and each factor is calculated from one or more metrics. Suppose a developer has the following values for these metrics:
| Factor | Metric | Value | Benchmark | Weight |
| --- | --- | --- | --- | --- |
| Quality | Percentage of Rework | 40% | Lower is better | 20% |
| | Percentage of Legacy Rework | 35% | Lower is better | |
| Impact | High-Impact Bugs Worked On Per Month | 1 | 2-3 | 15% |
| | High-Impact Stories Worked On Per Month | 3 | 4-6 | |
| Volume | Number of PRs per month | 4 | 5-7.5 | 25% |
| | Number of Commits per month | 8 | 10-15 | |
| | Lines of Code per month | 100 | 125-185 | |
| | Number of Bugs Worked On Per Month | 2 | 2-3 | |
| | Number of Stories Worked On Per Month | 4 | 5-7 | |
| | Number of Story Points Worked On Per Month | 18 | - | |
| Speed | Average Coding Days per Week | 2.5 | 3.2 | 10% |
| | Average PR Cycle Time | 9 days | < 7 days | |
| | Average Time Spent Working On Issues | 6 days | 3-5 days | |
| Proficiency | Technical Breadth | 2 | 2-3 | 10% |
| | Repo Breadth | 1 | 2-3 | |
| Leadership & Collaboration | Number of PRs Approved Per Month | 2 | 2-7 | 20% |
| | Number of PRs Commented On Per Month | 3 | 2-7 | |
| | Average Response Time for PR Approvals | 3 days | 0.75-2 days | |
| | Average Response Time for PR Comments | 2 days | 0.75-1.5 days | |
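If it helps to see the structure in code, here is a minimal sketch of the profile above, assuming a simple mapping of factors to weights (the names and layout are illustrative only, not the product's actual data model):

```python
# Illustrative sketch only: the six factors and the weights assigned to them
# in the example Trellis Profile above. In this example the weights add up to 100%.
factor_weights = {
    "Quality": 0.20,
    "Impact": 0.15,
    "Volume": 0.25,
    "Speed": 0.10,
    "Proficiency": 0.10,
    "Leadership & Collaboration": 0.20,
}

# Sanity check: the factor weights should cover the full score.
assert abs(sum(factor_weights.values()) - 1.0) < 1e-9
```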
Quality Score
The Quality Score is calculated based on two metrics: Percentage of Rework and Percentage of Legacy Rework.
Percentage of Rework = 40% (lower is better, so score = 60%)
Percentage of Legacy Rework = 35% (lower is better, so score = 65%)
If the Quality factor is given a Weight of 20% in the Trellis Profile, the Quality Score would be calculated as:
Quality Score = (60% + 65%) / 2 * 0.2 (weight) = 12.5%
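The same calculation as a small Python sketch, assuming (as with the other factors in this example) that a factor score is the average of its metric scores multiplied by the factor weight:

```python
# Metric scores taken from the example above:
# Percentage of Rework -> 60%, Percentage of Legacy Rework -> 65%.
quality_metric_scores = [0.60, 0.65]
quality_weight = 0.20  # weight given to the Quality factor in the Trellis Profile

quality_score = sum(quality_metric_scores) / len(quality_metric_scores) * quality_weight
print(f"Quality Score = {quality_score:.2%}")  # prints 12.50%
```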
Impact Score
The Impact Score is calculated based on two metrics: High-Impact Bugs Worked On Per Month and High-Impact Stories Worked On Per Month.
If the Impact factor is given a Weight of 15% in the Trellis Profile, and the benchmarks for these metrics are set to 2-3 for Bugs and 4-6 for Stories, the Impact Score would be calculated as:
High-Impact Bugs Worked On Per Month = 1 (below benchmark, so score = 33.33%)
High-Impact Stories Worked On Per Month = 3 (below benchmark, so score = 50%)
Impact Score = (33.33% + 50%) / 2 * 0.15 (weight) ≈ 6.25%
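Sketched the same way, using the two metric scores listed above (again assuming the average-times-weight pattern):

```python
# High-Impact Bugs -> 33.33%, High-Impact Stories -> 50% (scores from the example above).
impact_metric_scores = [0.3333, 0.50]
impact_weight = 0.15  # weight given to the Impact factor in the Trellis Profile

impact_score = sum(impact_metric_scores) / len(impact_metric_scores) * impact_weight
print(f"Impact Score = {impact_score:.2%}")  # prints roughly 6.25%
```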
Volume Score
The Volume Score is calculated based on six metrics.
If the Volume factor is given a Weight of 25% in the Trellis Profile, and the benchmarks for these metrics are set as per the recommended ranges, the Volume Score would be calculated as the average of the individual metric scores multiplied by the weight.
Number of PRs per month = 4 (below benchmark, so score = 80%)
Number of Commits per month = 8 (below benchmark, so score = 80%)
Lines of Code per month = 100 (below benchmark, so score = 80%)
Number of Bugs Worked On Per Month = 2 (within benchmark, so score = 100%)
Number of Stories Worked On Per Month = 4 (below benchmark, so score = 80%)
Number of Story Points Worked On Per Month = 18 (no benchmark set, so this metric is not scored and is excluded from the average)
Volume Score = (80% + 80% + 80% + 100% + 80%) / 5 * 0.25 (weight) = 21%
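A sketch of the Volume calculation. One assumption here: a metric with no benchmark set (Number of Story Points Worked On Per Month in this example) is treated as unscored and dropped from the average; the product's exact handling may differ.

```python
# Volume metric scores from the example; None marks the metric with no benchmark set.
volume_metric_scores = {
    "Number of PRs per month": 0.80,
    "Number of Commits per month": 0.80,
    "Lines of Code per month": 0.80,
    "Number of Bugs Worked On Per Month": 1.00,
    "Number of Stories Worked On Per Month": 0.80,
    "Number of Story Points Worked On Per Month": None,  # assumed excluded (no benchmark)
}
volume_weight = 0.25  # weight given to the Volume factor in the Trellis Profile

scored = [s for s in volume_metric_scores.values() if s is not None]
volume_score = sum(scored) / len(scored) * volume_weight
print(f"Volume Score = {volume_score:.2%}")  # prints 21.00%
```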
Speed Score
The Speed Score is calculated based on three metrics: Average Coding Days per Week, Average PR Cycle Time, and Average Time Spent Working On Issues.
If the Speed factor is given a Weight of 10% in the Trellis Profile, and the benchmarks for these metrics are set as per the recommended ranges, the Speed Score would be calculated as the average of the individual metric scores multiplied by the weight.
Average Coding Days per Week = 2.5 (below benchmark, so score = 78.13%)
Average PR Cycle Time = 9 days (above benchmark, so score = 0%)
Average Time Spent Working On Issues = 6 days (above benchmark, so score = 0%)
Speed Score = (78.13% + 0% + 0%) / 3 * 0.1 (weight) ≈ 2.6%
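The Speed calculation in the same sketch form, using the three metric scores listed above:

```python
# Coding Days -> 78.13%, PR Cycle Time -> 0%, Time Spent on Issues -> 0% (from the example).
speed_metric_scores = [0.7813, 0.0, 0.0]
speed_weight = 0.10  # weight given to the Speed factor in the Trellis Profile

speed_score = sum(speed_metric_scores) / len(speed_metric_scores) * speed_weight
print(f"Speed Score = {speed_score:.2%}")  # prints roughly 2.60%
```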
Proficiency Score
The Proficiency Score is calculated based on two metrics: Technical Breadth and Repo Breadth.
If the Proficiency factor is given a Weight of 10% in the Trellis Profile, and the benchmarks for these metrics are set to 2-3 for both, the Proficiency Score would be calculated as:
Technical Breadth = 2 (within benchmark, so score = 100%)
Repo Breadth = 1 (below benchmark, so score = 50%)
Proficiency Score = (100% + 50%) / 2 * 0.1 (weight) = 7.5%
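The Proficiency calculation in the same sketch form (metric scores from the example above):

```python
# Technical Breadth -> 100%, Repo Breadth -> 50% (scores from the example above).
proficiency_metric_scores = [1.00, 0.50]
proficiency_weight = 0.10  # weight given to the Proficiency factor in the Trellis Profile

proficiency_score = sum(proficiency_metric_scores) / len(proficiency_metric_scores) * proficiency_weight
print(f"Proficiency Score = {proficiency_score:.2%}")  # prints 7.50%
```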
Leadership & Collaboration Score
The Leadership and Collaboration Score is calculated based on four metrics: Number of PRs Approved Per Month, Number of PRs Commented On Per Month, Average Response Time for PR Approvals, and Average Response Time for PR Comments.
If the Leadership and Collaboration factor is given a Weight of 20% in the Trellis Profile, and the benchmarks for these metrics are set as per the recommended ranges, the Leadership and Collaboration Score would be calculated as the average of the individual metric scores multiplied by the weight.
Number of PRs Approved Per Month = 2 (within benchmark, so score = 100%)
Number of PRs Commented On Per Month = 3 (within benchmark, so score = 100%)
Average Response Time for PR Approvals = 3 days (above benchmark, so score = 0%)
Average Response Time for PR Comments = 2 days (above benchmark, so score = 0%)
Leadership & Collaboration Score = (100% + 100% + 0% + 0%) / 4 * 0.2 (weight) = 10%
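The same formula as a sketch, mirroring the calculation above:

```python
# PRs Approved -> 100%, PRs Commented -> 100%, both response-time metrics -> 0% (from the example).
leadership_metric_scores = [1.00, 1.00, 0.0, 0.0]
leadership_weight = 0.20  # weight given to the Leadership & Collaboration factor

leadership_score = sum(leadership_metric_scores) / len(leadership_metric_scores) * leadership_weight
print(f"Leadership & Collaboration Score = {leadership_score:.2%}")  # prints 10.00%
```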
Overall Trellis Score
The overall Trellis Score for the developer would be the sum of the individual factor scores, providing a comprehensive assessment of their performance across various dimensions.
Overall Trellis Score = Quality Score + Impact Score + Volume Score + Speed Score + Proficiency Score + Leadership & Collaboration Score
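Using the factor scores worked out above, the example developer's overall score comes to roughly 12.5% + 6.25% + 21% + 2.6% + 7.5% + 10% ≈ 59.85%. As a final sketch (the values are the ones derived in the earlier sections):

```python
# Factor scores derived in the sections above (each already includes its weight).
factor_scores = {
    "Quality": 0.125,
    "Impact": 0.0625,
    "Volume": 0.21,
    "Speed": 0.026,
    "Proficiency": 0.075,
    "Leadership & Collaboration": 0.10,
}

overall_trellis_score = sum(factor_scores.values())
print(f"Overall Trellis Score = {overall_trellis_score:.2%}")  # prints roughly 59.85%
```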
The breakdown of scores for each factor highlights areas where the developer is performing well (Quality, Volume, Proficiency) and areas that need improvement (Impact, Speed, Leadership & Collaboration).