
Lessons Learned from Dotnet Upgrades
- AUTHOR: Will Osbourne
- PUBLISHED: 2023-12-31
Let’s get one thing out of the way first: this is not a guide to upgrading your .NET Framework project to .NET 6(+). I’m here to share some pitfalls from my experience doing two .NET upgrades in 2023, to help anyone else going through a similar process in the new year (and beyond). I think that learning and sharing knowledge gained along the way is valuable, even if I’m not a comprehensive expert on the topic.

First, we should lay out the ground rules of the stacks I was dealing with. Project A was a .NET MVC application using Entity Framework, Kendo MVC, and a heaping helping of jQuery on the side. It had little in the way of third-party API integrations, nor any notably heavyweight NuGet packages. A’s biggest challenge was that it was a very old codebase that had seen many hands and was a nightmare of bad practices.

Project B was, again, a .NET MVC application, using Entity Framework, ASP.NET’s Identity system, Bootstrap, and DevExtreme (a jQuery-based UI library). The codebase was smaller & cleaner, but not perfect. It also had its share of external dependencies in the form of REST API integrations, NuGet SDKs handling API integration, and SAML authentication (on both the Service & Identity Provider sides).
What Went Wrong
Across both projects, I narrowed down a few of the bigger stumbling points I ran into, and how I think they could be avoided (or at worst, foreseen & planned for):
- The Entity Framework 6 to Entity Framework Core change
- Why Model / ViewModel best practices are extra important, and what happens when you don’t follow them
- AJAX cleanliness, consistency, and consolidation
- Looking at heavy NuGet packages’ upgrade paths before you start
- Have a regression testing plan created before the upgrade starts

Another note that will be present through most of these lessons: compilation errors are not half the battle, they are maybe a quarter. The biggest time sinks, which I didn’t originally expect, were runtime errors. Errors that only come up from actually using and hitting every little code path are hard to find, and typically don’t allow for a lot of pre-planning & delegation. Once the program compiles & broadly works from a basic standpoint, then begins a long, arduous game of whack-a-mole going through the site & seeing what trips up.
1. Entity Framework
If you’re using .NET, there’s a good chance you’re using Entity Framework. Despite the similar naming, Entity Framework 6 and Entity Framework Core are different frameworks. They’re very similar, with a lot of comparable methods & class names, but the feature set is not completely 1-to-1; if you’re interested, this Microsoft article details the differences. Don’t fret though, the changes aren’t too bad for typical use cases.
The first EF consideration I’ll point out is that the database configuration, like most .NET configuration, has moved to Program.cs and follows the new syntax conventions.
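As a rough sketch of what that registration looks like (assuming a context named AppDbContext and a connection string named "Default", both placeholders for your own names):

```csharp
// Program.cs — the EF Core context is registered with the DI container,
// instead of an EF6-style Web.config / constructor convention.
using Microsoft.EntityFrameworkCore;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddDbContext<AppDbContext>(options =>
    options.UseSqlServer(builder.Configuration.GetConnectionString("Default")));

builder.Services.AddControllersWithViews();

var app = builder.Build();
app.MapDefaultControllerRoute();
app.Run();
```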
Second, EF migrations really don’t… migrate. I recommend treating the EF6 migrations as static from then on, and creating a new folder for EF Core migrations. EF will also create a new table to track its migration history, so don’t sweat it when you see an __EFMigrationsHistory table next to your existing __MigrationHistory table in your database.
Next, looking at the implementation of your existing models: the EF Core Data Annotations have changed slightly. The change I encountered the most was that Compound Primary Keys need explicit declaration of which fields are used, and in which order. Many-to-Many relation declarations have also moved into the OnModelCreating function, rather than being definable directly in the models.
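For example, a compound key can now be declared through the fluent API, with the participating fields and their order spelled out (a sketch; OrderLine and its properties are illustrative names, and note that EF Core 7 later added an annotation-based [PrimaryKey] attribute as an alternative):

```csharp
using Microsoft.EntityFrameworkCore;

public class OrderLine
{
    public int OrderId { get; set; }
    public int LineNumber { get; set; }
    public string? Product { get; set; }
}

public class AppDbContext : DbContext
{
    public DbSet<OrderLine> OrderLines => Set<OrderLine>();

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Compound primary key: explicitly name which fields
        // participate, and in what order.
        modelBuilder.Entity<OrderLine>()
            .HasKey(ol => new { ol.OrderId, ol.LineNumber });
    }
}
```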
Lastly, one project of mine had a very minor reliance on some Lazy Loading, with it only being implemented on a few models (this was probably a code smell), and when moving to EF Core, it seemed I really needed to add it to everything. That wouldn’t be so bad if it weren’t for the next topic:
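For reference, the “add it to everything” shape of this in EF Core looks roughly like the following, assuming the Microsoft.EntityFrameworkCore.Proxies package (names here are illustrative):

```csharp
// Opting in to lazy loading globally via proxies. Every navigation
// property must then be virtual for the proxy to intercept it,
// hence needing to add it everywhere.
builder.Services.AddDbContext<AppDbContext>(options =>
    options.UseLazyLoadingProxies()
           .UseSqlServer(builder.Configuration.GetConnectionString("Default")));

public class Order
{
    public int Id { get; set; }
    // "virtual" is what lets the proxy lazily load this collection
    public virtual ICollection<OrderLine> Lines { get; set; } = new List<OrderLine>();
}
```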
2. Model / ViewModel Cleanliness
Looking back a bit: when I started with .NET MVC, I didn’t initially understand the necessity of a ViewModel. I thought, “well, my Model is pretty simple, and it’s what I inevitably need, why would I just duplicate the file?” Well, I have learned my lesson. Picking back up from EF: enabling Lazy Loading on all my Models and then trying to serialize them to JSON to return to the client led to cyclic object serialization issues. This drastically extended my time for the upgrade because it led to runtime errors, which I had to actually encounter to find. Then I (generally) had to create a new ViewModel to handle the return, to prevent EF Core lazy loading from going wild during serialization (if anyone knows how this could have been handled better, I’m all ears).
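The pattern I landed on looks roughly like this: project into the ViewModel inside the query itself, so the serializer only ever sees a plain DTO (a sketch; Order, OrderSummaryViewModel, and _db are illustrative names):

```csharp
public class OrderSummaryViewModel
{
    public int Id { get; set; }
    public string? CustomerName { get; set; }
    public int LineCount { get; set; }
}

public IActionResult GetOrders()
{
    var result = _db.Orders
        .Select(o => new OrderSummaryViewModel
        {
            Id = o.Id,
            CustomerName = o.Customer.Name,
            LineCount = o.Lines.Count
        })
        .ToList();

    // Plain DTOs serialize cleanly: no proxies, no navigation cycles.
    return Json(result);
}
```

A nice side effect is that the projection also keeps lazy loading from firing per-row, since everything needed is pulled in the one query.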
3. AJAX Consistency
The move to .NET 6 showed some inconsistencies in how we were handling data between the client & server. Both projects used jQuery AJAX calls pretty heavily, and there was some variation in how we used them. The two biggest things that caught us were variable casing inconsistencies and POST data submission inconsistencies.
For casing, our .NET Framework projects were returning property casing that matched the model casing, which was typically PascalCase, but not consistently. The JS was handling things as it was given, situationally. Moving to .NET 6, the default was to convert everything to camelCase. We could have turned this off, but we chose to leave it on and take the time to make the frontend casing more consistent & predictable.
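If you do want to keep the old PascalCase responses instead, the default camelCase policy can be switched off in Program.cs (a sketch; we chose to stick with camelCase):

```csharp
// Setting the naming policy to null keeps property names exactly as
// declared on the C# models (i.e. PascalCase), rather than
// System.Text.Json's camelCase default.
builder.Services.AddControllersWithViews()
    .AddJsonOptions(options =>
    {
        options.JsonSerializerOptions.PropertyNamingPolicy = null;
    });
```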
In terms of data transfer, there seemed to be no rhyme or reason to whether we were sending our POST data to the backend as JSON or Form Data. With .NET Framework, this was no real issue; if the data was passed, it could be retrieved as either a ViewModel or individual controller function parameters. In .NET 6, we were required to be more explicit about where the data was coming from, with [FromBody] and [FromForm] denoting JSON and Form Data respectively, as well as needing to make new ViewModels to better handle some inputs.
4. NuGet Package Upgrade Paths
I’ll keep this short, as it’s very situational, but take a look at your NuGet packages, especially critical ones. A lot of packages these days have both .NET Framework & 6+ versions, with few code changes necessary for the upgrade, but not everything does. In my Project B, we had a critical feature of our site dependent on a NuGet SDK that had been completely deprecated, with no support or upgrade path for .NET 6. We didn’t realize this before beginning, and when we got to that section of the site, it had to be completely re-written to use a REST API rather than the SDK we started with. Sometimes there’s no getting around hurdles like this, but taking a look before everything gets started & understanding the time cost of non-linear package upgrades is important for accurate estimates.
5. Have a Regression Testing Plan Ready from the Start
Not every project can have a well-funded, time-rich QA team, but undertaking an upgrade like this warrants as much pre- and post-testing as can be afforded, whether by a QA team or just the developers handling it. Before you start your upgrade, I highly recommend having a Regression Testing Plan that covers your whole application, and (especially if the plan is created for this upgrade) running it against your existing application. This is critical because, let’s be honest, apps have bugs. It’s important to know which bugs are pre-existing and which have been added by the upgrade, and to know exactly what behavior is expected & unexpected.

If you have a QA team, it will also enable a tighter feedback loop, with everyone understanding what is expected to be working and, importantly, what is expected to be broken. This isn’t to say you can’t take time during the upgrade to fix those small annoyances (I actually encourage that), but knowing which bugs are pre-existing allows you to prioritize them appropriately. If you’re close to running out of time, you may find small pre-existing bugs that you have to acknowledge & backlog, instead of dropping everything & switching contexts to fix them. Having the validation handled from the beginning saves time at the stressful end of a long upgrade.
What You Can Do to Prepare
It’s tough to be 100% ready for something with as many unknowns as a .NET upgrade, but you can try your best to get to around 90%. From what I’ve discussed here, I recommend analyzing your Entity Framework setup, specifically any more complex models & relations you have, and understanding that they will need more attention. Be aware and honest about the cleanliness of your codebase, and understand that inconsistencies will lead to issues, and will make the issues you face more varied rather than more predictable and repeatable. Look into your critical NuGet packages; most Microsoft/dotnet packages should be alright, but take note of ones specific to your business, as they may not have an easy upgrade path, and you should pre-plan for what that means in terms of effort. Finally, please, please have a regression testing plan before you start. You simply must test every piece of the app after the upgrade, so you should do the same before, so that you know exactly what the outcome should look like.
Closing Remarks
I hope I haven’t discouraged anyone from looking to upgrade their codebase; it can be tough, but it is absolutely worth it. Getting your application to a more cross-platform, more sustainable, more enjoyable version of dotnet is attainable, and I want you to try to do it. Lean into the challenge, and hey, you’re going to regression test the site anyway (right?), so take a bit of time with it to refactor those pieces of the codebase that have been getting on your nerves. If you want additional help with your team’s upgrade to .NET 6, feel free to reach out to Seven Hills Technology and ask for Will. My experienced coworkers who’ve helped me along with these upgrades and I would be happy to talk more about our experiences and see if we could be a fit to help you along with your upgrade!