Performance always matters for CodeRush. Productivity tools, by their very definition, should not slow you down.
With the recent CodeRush migration to Microsoft’s Roslyn engine, we eliminated the cost of maintaining an array of language infrastructures. Visual Studio already parses, performs semantic analysis, stores solution structure, and searches for symbols, all of which allows CodeRush to use less memory and increase performance.
However, relying upon the Roslyn engine alone does not solve every performance challenge. CodeRush is a complex product with many points of Visual Studio integration where event listening and custom editor rendering are necessary. And in areas where Visual Studio provides insufficient functionality, CodeRush fills the gap with its own core engine. All these factors can introduce performance challenges, which, if handled improperly, could result in slowdowns and perhaps a “yellow bar” performance warning appearing inside Visual Studio.
Over the years, performance has been a top priority for our team. And our work for many of those years has been reactive: a customer experiences a performance issue, reports it, and then we hunt it down and kill it. This approach often yields positive results, provided we can reproduce the issue. However, sometimes we may not have enough information to reproduce the slowdown, or customers may not be able to provide the essential details needed, for example the results from running a PerfView analysis.
CodeRush Performance Logging
To improve the quality of the data our developers use to make CodeRush faster, in recent years we introduced a performance logging system, which lets CodeRush continually monitor its own performance in critical code sections and record any issues it finds to the CodeRush performance log file.
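CodeRush internals are not public, but conceptually you can picture this kind of self-monitoring as a disposable timing scope wrapped around each critical section. The sketch below is a hypothetical illustration, not the actual CodeRush API; the `PerformanceScope` and `PerformanceLog` names and the 100 ms threshold are our own assumptions:

```csharp
using System;
using System.Diagnostics;

// Hypothetical illustration of self-monitoring: a disposable timing scope
// that records a performance event only when a critical section runs
// longer than a threshold. Names and threshold are assumptions.
static class PerformanceLog
{
    public static void Write(string section, TimeSpan elapsed) =>
        Console.WriteLine($"[perf] {section}: {elapsed.TotalMilliseconds:F0} ms");
}

sealed class PerformanceScope : IDisposable
{
    static readonly TimeSpan Threshold = TimeSpan.FromMilliseconds(100);

    readonly string _sectionName;
    readonly Stopwatch _stopwatch = Stopwatch.StartNew();

    public PerformanceScope(string sectionName) => _sectionName = sectionName;

    public void Dispose()
    {
        _stopwatch.Stop();
        if (_stopwatch.Elapsed > Threshold)          // log only actual slowdowns
            PerformanceLog.Write(_sectionName, _stopwatch.Elapsed);
    }
}

// Usage: wrap a monitored code section.
// using (new PerformanceScope("TestRunner.DiscoverTests"))
//     DiscoverTests();
```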
We monitor many performance events within both CodeRush and Visual Studio, including:
CodeRush Startup
- Loading the entire product.
- Starting each separate CodeRush service.
- Individual initialization for selected services.
Editor & IDE Events
- The time spent on each individual handler.
- The time spent processing the entire event (all event handlers).
Visual Studio Interaction
- Checking the availability of menu items, toolbar buttons, commands, etc.
- Measuring the execution time of user interface items.
- Editor rendering time.
Dispatcher.Invoke
- Any CodeRush code executed through Dispatcher.Invoke.
Since this code executes on the main UI thread, it needs to be both absolutely necessary and optimized for speed, or performance will suffer. So CodeRush monitors all of its code blocks executed on the main thread in Visual Studio (a sketch of how such monitoring might look appears after this list).
Refactorings & Code Generators
- Availability checks (determining which refactorings/generators are available).
- Execution times (how long it takes to apply a refactoring/generator).
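Here is the sketch mentioned above for monitoring UI-thread work. It is a hypothetical example of wrapping Dispatcher.Invoke with timing; the `MonitoredInvoke` extension method and the 50 ms threshold are assumptions for illustration, not the actual CodeRush code:

```csharp
using System;
using System.Diagnostics;
using System.Windows.Threading;

// Hypothetical sketch: time any delegate marshaled to the UI thread
// through Dispatcher.Invoke. Names and threshold are assumptions.
static class DispatcherMonitoring
{
    static readonly TimeSpan Threshold = TimeSpan.FromMilliseconds(50);

    public static void MonitoredInvoke(this Dispatcher dispatcher, string operation, Action callback)
    {
        dispatcher.Invoke(() =>
        {
            var stopwatch = Stopwatch.StartNew();
            callback();                              // the actual UI-thread work
            stopwatch.Stop();

            // Time spent here blocks the entire Visual Studio UI,
            // so anything over the threshold is worth recording.
            if (stopwatch.Elapsed > Threshold)
                Console.WriteLine(
                    $"[perf] UI-thread call '{operation}': {stopwatch.Elapsed.TotalMilliseconds:F0} ms");
        });
    }
}
```

A design like this measures inside the invoked delegate itself, so it captures only the time the UI thread actually spends executing the work, not the time the request waits in the dispatcher queue.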
Today, with the latest performance logging, if a customer experiences a slowdown, the CodeRush logs are more likely to contain the data needed to show what’s happening and why. And since we introduced performance logging, our support team has been able to more quickly diagnose and solve reported issues.
If you are a member of the DevExpress Customer Experience Program, details of any slowdowns are sent automatically to our support team for analysis. This allows us to get a broader picture of what our customers are experiencing – what causes the slowdown, and under what conditions.
We can also track our progress, and any regressions, across versions, and as a result, make our work more systematic and strategic.
Strategic Response
Before each sprint, we collect and analyze performance data: causes, total number of events, and the total sum of all delays.
To compare changes across minor releases more objectively, for every performance event we compute the total delay for that event divided by the sum of all delays for all reported events. This normalized share is our primary metric; it is essentially how we make the performance data comparable across releases, and it helps us prioritize which events we focus on for each release. Our sprints are 30 days apart, so we get about 12 releases a year.
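To make the normalization concrete, here is a minimal sketch; the event names and delay totals below are invented for illustration:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Minimal sketch of the normalization described above: each event's
// metric is its total delay divided by the sum of all reported delays
// in that release. All data values are invented.
class MetricDemo
{
    static Dictionary<string, double> Normalize(Dictionary<string, double> totalDelaysMs)
    {
        double sum = totalDelaysMs.Values.Sum();
        return totalDelaysMs.ToDictionary(kv => kv.Key, kv => kv.Value / sum);
    }

    static void Main()
    {
        var release = new Dictionary<string, double>   // total delay per event, in ms
        {
            ["Startup.LoadProduct"] = 4200,
            ["Editor.Render"]       = 1300,
            ["TestRunner.Discover"] = 2500,
        };

        foreach (var kv in Normalize(release))
            Console.WriteLine($"{kv.Key}: {kv.Value:P1} of all reported delay");
    }
}
```

Because each event's share is relative to that release's own total, the metric stays comparable across releases even when the number of users or logged events changes.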
For each release, we compare the metrics collected for the current release against those of the previous release. For each event, our strategic response depends upon the results of this comparison:
- Minor changes to the metric, either small improvements or small degradations, generally mean nothing has changed. If we had been working to improve performance in this area, that work was likely unsuccessful.
- A significant decrease in the metric means customers are experiencing substantially shorter delays per event (CodeRush is noticeably faster). We analyze our changes for this version and make sure we have a reason to explain the improvement. If we worked to improve performance, this drop in the metric confirms we are moving in the right direction. We can continue to improve performance further, or move on to another challenge if we’re satisfied with the results.
- A significant increase in the metric means customers are experiencing a longer average delay per performance event. In this case we look closely at the changes across sprints so we can understand what might have caused the regression, and then we prioritize addressing it in the next sprint.
- A performance event has completely disappeared. While on the surface this may look like a good thing, a change like this always warrants deeper investigation. It could mean our efforts completely corrected the issue, or it could mean that, due to changes in the code, one event became another. So our response here is to meticulously analyze the data until we understand why.
- A new performance event appeared that was not previously identified. This almost always happens because we’ve extended the detail of our logging into a new area. For example, if it is unclear what is causing a delay at the start of a service, we might add more logging for the individual stages of that service’s startup. This can sometimes cause new performance events to be added as children of the old event.
The important point here is that when the performance data changes across releases, we investigate the cause and verify whether the changes were accidental or intentional.
Real World Data
Here are some actual results of CodeRush's performance logging and our data analysis, comparing CodeRush releases 20.2.3 against 20.2.11.
In CodeRush v20.2.3, we had 58 active performance events.
In v20.2.11:
- For 29 of these events, we managed to achieve performance improvements of more than 50 percent.
- For 5 of these events, we achieved performance improvements of more than 80 percent.
- For 22 of these events, we completely fixed the performance issues and never saw them logged again.
Here’s a short summary of what we’ve accomplished in this time span:
Improved Startup Performance
We completely rewrote the code engine responsible for interaction with Visual Studio in order to eliminate any slowdowns when Visual Studio loads. We also optimized a number of CodeRush internal services to speed up their work.
This graph shows our results over this time span for improving startup performance (smaller values on the y-axis are faster):
Improved Test Runner Performance
A better test discovery engine lets the CodeRush Test Runner find tests in your solution faster. We also optimized internal services and reduced our project dependency graph build time, which improves test run speed.
Faster Rendering
We increased rendering speed and optimized the Spell Checker, Rich Comments (including Image Embedding), Unused Code Highlighting, Structural Highlighting, and Member Icons.
Faster Typing
We improved typing performance in the Visual Studio editor by optimizing the String Format Assistant, Naming Assistant, Smart Semicolon, Smart Dot, IntelliRush, and Code Template Expansion. We also made command availability checking faster.
The CodeRush team continues to work hard to win performance gains, and we look forward to achieving greater progress in this direction in future CodeRush releases.
Please let us know what you think about CodeRush performance, and whether there are areas where we can make CodeRush or Visual Studio even faster for you.