Imagine a city with well-planned roads, traffic signals, and public transport. Now, imagine a chaotic city where roads are frequently blocked, signals are ignored, and buses take up all the space on the streets. Which city do you think would offer a better experience for its citizens?
Designing scalable customizations in Microsoft Dataverse is like city planning: it requires thoughtful design to keep operations running smoothly, prevent congestion (performance issues), and avoid unnecessary bottlenecks (errors).
Many developers unknowingly introduce performance issues when customizing Dataverse, leading to symptoms such as slow requests, deadlocks, and SQL timeouts. These are not random problems but the result of poor design choices that impact system performance.
This blog will break down common mistakes, their real-world metaphors, and how to design scalable, efficient, and resilient Dataverse customizations.
🚦 The Challenge: Understanding Bottlenecks in Dataverse
Developers often assume that Dataverse automatically scales to handle all workloads without issues. While Dataverse is powerful, it enforces certain constraints to ensure system stability and prevent a single user or process from consuming too many resources.
🛑 Common Symptoms of Poor Design
| Symptom | Real-World Metaphor | Example Scenario in Dataverse |
|---|---|---|
| Slow requests | Like waiting in a long queue at a busy store. | A poorly designed plug-in fetches too much data, slowing the user interface. |
| SQL timeouts | Like a slow-moving checkout line with too many customers. | A batch job updating thousands of records at once causes a timeout. |
| Deadlocks | Like two cars stuck in an intersection, neither able to move forward. | Two plug-ins try to update the same record simultaneously, causing conflicts. |
| Limited throughput | Like trying to fill a swimming pool using a small pipe. | Using ExecuteMultiple to send thousands of requests at once slows the system. |
| Intermittent errors | Like Wi-Fi dropping in and out. | A plug-in runs fine one moment but fails the next due to resource constraints. |
🚨 The Wrong Approach: Blaming the Platform
When these issues arise, many assume the platform is at fault rather than their customizations. Some even ask for Dataverse constraints to be relaxed so their queries can run longer.
But imagine removing traffic signals to reduce wait times: it might seem like a good idea at first, but in reality it leads to chaos and gridlock. Similarly, loosening Dataverse constraints would only make performance problems worse for all users.
Instead of bypassing these limits, the best solution is to optimize custom implementations to work efficiently within the system’s design.
Understanding the Causes of Performance Issues
The root causes of performance problems in Dataverse often boil down to three key factors:
1️⃣ Long-Running Transactions – “The Never-Ending Queue”
Problem: Holding a database transaction open for too long blocks other operations.
Example: A plug-in retrieves, processes, and updates multiple records in one go.
Fix:
- Break large transactions into smaller, manageable units.
- Use asynchronous processing for bulk operations.
- Minimize the data fetched in plug-ins to only what is required (see the sketch below).
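To make this concrete, here is a minimal sketch of a plug-in that keeps its in-transaction work small and retrieves only the single column it needs. It assumes a plug-in registered on the account table; the column names and the credit-limit threshold are illustrative, and registration details (stage, message, loop protection) are omitted.

```csharp
using System;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

public class SetCreditHoldPlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
        var factory = (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory));
        IOrganizationService service = factory.CreateOrganizationService(context.UserId);

        // Fetch only the one column this logic needs instead of the whole record.
        Entity account = service.Retrieve("account", context.PrimaryEntityId, new ColumnSet("creditlimit"));

        // Keep the in-transaction work tiny; bulk processing belongs in an
        // asynchronous step, a Power Automate flow, or an Azure Function.
        var update = new Entity("account", account.Id);
        update["creditonhold"] = account.GetAttributeValue<Money>("creditlimit")?.Value < 1000m;
        service.Update(update);
    }
}
```

The key design choice is that the transaction only ever touches one record and one column, so it finishes fast and releases its locks quickly.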
2️⃣ Database Blocking – “The One-Lane Roadblock”
Problem: Multiple operations try to modify the same records at the same time, leading to contention.
Example: Two plug-ins attempt to update the same Account record simultaneously, causing deadlocks.
Fix:
- Use Optimistic Concurrency Control (checking whether data has changed before updating); a sketch follows this list.
- Reduce the number of records locked at the same time.
- Avoid nested updates, where one operation updates a record that another operation also needs.
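Here is a minimal sketch of the optimistic-concurrency idea, assuming you have an IOrganizationService instance and a table with row versioning enabled; the table and column names are placeholders.

```csharp
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Messages;
using Microsoft.Xrm.Sdk.Query;

public static class SafeUpdater
{
    public static void RenameAccount(IOrganizationService service, System.Guid accountId)
    {
        // Read the record; the response carries its current RowVersion.
        Entity account = service.Retrieve("account", accountId, new ColumnSet("name"));

        var update = new Entity("account", account.Id)
        {
            RowVersion = account.RowVersion // send back the version we read
        };
        update["name"] = "Contoso (updated)";

        var request = new UpdateRequest
        {
            Target = update,
            // Fail the update instead of silently overwriting if another
            // process changed the row after we read it.
            ConcurrencyBehavior = ConcurrencyBehavior.IfRowVersionMatches
        };

        try
        {
            service.Execute(request);
        }
        catch (System.ServiceModel.FaultException<OrganizationServiceFault>)
        {
            // If the fault signals a row-version mismatch, someone else won the
            // race: re-read the record and retry or merge, rather than blocking.
        }
    }
}
```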
3️⃣ Complex Queries – “The Slowest Checkout Line”
Problem: Running heavy queries with multiple joins, filters, and subqueries slows down performance.
Example: A custom FetchXML query retrieves all records instead of filtering only the necessary data.
Fix:
- Use indexed fields for filtering and searching.
- Optimize FetchXML queries by avoiding unnecessary joins.
- Use RetrieveMultiple with paging to fetch data in smaller chunks (see the paging sketch below).
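The paging approach can look roughly like this sketch, which assumes an IOrganizationService instance; the page size of 500 and the contact columns are illustrative.

```csharp
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

public static class PagedRetrieve
{
    public static void ProcessActiveContacts(IOrganizationService service)
    {
        var query = new QueryExpression("contact")
        {
            ColumnSet = new ColumnSet("fullname", "emailaddress1"),            // only the needed columns
            PageInfo = new PagingInfo { PageNumber = 1, Count = 500 }          // small pages, not one huge result
        };
        query.Criteria.AddCondition("statecode", ConditionOperator.Equal, 0);  // filter on the server

        while (true)
        {
            EntityCollection page = service.RetrieveMultiple(query);

            foreach (Entity contact in page.Entities)
            {
                // Process each record in small, fast units of work.
            }

            if (!page.MoreRecords) break;

            // Advance to the next page using the paging cookie from the last response.
            query.PageInfo.PageNumber++;
            query.PageInfo.PagingCookie = page.PagingCookie;
        }
    }
}
```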
Designing for Dataverse Constraints: Best Practices
Microsoft Dataverse enforces important constraints to ensure stability, but with proper design, these constraints rarely become an issue.
🚀 1. Plug-in Execution Timeouts – “Finish the Race Before the Clock Runs Out”
Issue – Plug-ins time out after two minutes, a limit that prevents long-running processes from overloading the system.
Solution – Offload heavy tasks to Power Automate flows or Azure Functions.
Example Fix:
❌ Bad: Fetching and updating 10,000 records inside a plug-in.
✅ Good: Using an Azure Function to handle bulk operations asynchronously.
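One way to apply this hand-off is to let the plug-in record only the intent and leave the heavy lifting to an external worker such as an Azure Function. The sketch below assumes a custom "new_bulkjob" table created for that purpose; the table and column names are hypothetical, and the worker that picks up the job is out of scope here.

```csharp
using System;
using Microsoft.Xrm.Sdk;

public class QueueBulkJobPlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
        var factory = (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory));
        IOrganizationService service = factory.CreateOrganizationService(context.UserId);

        // Instead of updating 10,000 records inside the 2-minute sandbox, create one
        // lightweight "job" record. An Azure Function (triggered by a webhook or a
        // schedule) processes the records in batches on its own time.
        var job = new Entity("new_bulkjob");
        job["new_name"] = $"Recalculate totals requested by {context.InitiatingUserId}";
        job["new_status"] = new OptionSetValue(1); // hypothetical "Pending" status
        service.Create(job);
    }
}
```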
🛑 2. SQL Timeouts – “Don’t Keep the Server Waiting Too Long”
Issue – Queries that run longer than 120 seconds are cancelled by the platform.
Solution – Optimize FetchXML queries and avoid retrieving too much data at once.
Example Fix:
❌ Bad: Running a query that fetches all records in a table.
✅ Good: Using a filtered query to fetch only required records.
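As a rough illustration, a filtered FetchXML query for the "good" case might look like this; the table, columns, and seven-day window are assumptions made for the example.

```csharp
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

public static class FilteredFetch
{
    public static EntityCollection GetRecentOpenOrders(IOrganizationService service)
    {
        // Return only two columns, only active orders, only the last 7 days,
        // and cap the result set instead of scanning the whole table.
        const string fetchXml = @"
            <fetch top='200'>
              <entity name='salesorder'>
                <attribute name='name' />
                <attribute name='totalamount' />
                <filter type='and'>
                  <condition attribute='statecode' operator='eq' value='0' />
                  <condition attribute='createdon' operator='last-x-days' value='7' />
                </filter>
              </entity>
            </fetch>";

        return service.RetrieveMultiple(new FetchExpression(fetchXml));
    }
}
```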
🔁 3. Avoiding Deadlocks – “Let’s Not Fight Over the Same Record”
Issue – Multiple processes trying to update the same record simultaneously can cause deadlocks.
Solution – Reduce record locking and avoid nested updates.
Example Fix:
❌ Bad: Two plug-ins modifying the same Order record at the same time.
✅ Good: Using pre-validation steps to check record changes before processing.
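A minimal sketch of that kind of pre-stage check: compare the incoming value against a pre-image and drop unchanged columns, so the platform never has to lock and rewrite data that didn't change. This version assumes a pre-operation step with a pre-image registered under the name "PreImage" and a hypothetical "new_notes" column.

```csharp
using System;
using Microsoft.Xrm.Sdk;

public class SkipNoOpUpdatePlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
        var target = (Entity)context.InputParameters["Target"];
        Entity preImage = context.PreEntityImages["PreImage"]; // image name is an assumption

        // If the incoming value matches what is already stored, remove it from the
        // update. Fewer columns written means shorter locks and less deadlock risk.
        if (target.Contains("new_notes") &&
            target.GetAttributeValue<string>("new_notes") == preImage.GetAttributeValue<string>("new_notes"))
        {
            target.Attributes.Remove("new_notes");
        }
    }
}
```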
📦 4. Avoiding Service Protection API Limits – “Don’t Overwhelm the System”
Issue – Sending too many API calls in a short time results in throttling.
Solution – Use batch processing (ExecuteMultiple) carefully and respect service protection limits.
Example Fix:
❌ Bad: Making 500 individual API calls in a loop.
✅ Good: Using ExecuteMultiple to send API requests in batches.
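Here is a sketch of the batched approach, assuming an IOrganizationService instance; the batch size of 100 is illustrative (a single ExecuteMultiple request is capped at 1,000 requests).

```csharp
using System.Collections.Generic;
using System.Linq;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Messages;

public static class BatchUpdater
{
    public static void UpdateInBatches(IOrganizationService service, IList<Entity> updates)
    {
        const int batchSize = 100;

        for (int i = 0; i < updates.Count; i += batchSize)
        {
            var request = new ExecuteMultipleRequest
            {
                Settings = new ExecuteMultipleSettings
                {
                    ContinueOnError = true,   // one bad record doesn't sink the batch
                    ReturnResponses = false   // skip responses we don't need
                },
                Requests = new OrganizationRequestCollection()
            };

            foreach (Entity entity in updates.Skip(i).Take(batchSize))
            {
                request.Requests.Add(new UpdateRequest { Target = entity });
            }

            // One round trip per batch instead of one per record; back off and retry
            // if the platform signals service-protection throttling.
            service.Execute(request);
        }
    }
}
```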
🎯 The Right Way to Scale in Dataverse
To scale efficiently in Dataverse, follow these golden rules:
✅ Keep transactions short – The longer a transaction runs, the more it impacts performance.
✅ Use batch processing correctly – Don’t overwhelm Dataverse with too many requests at once.
✅ Optimize queries – Avoid fetching more data than necessary.
✅ Leverage Power Automate and Azure Functions – Move heavy operations outside of plug-ins.
✅ Understand platform constraints – Work within Dataverse limits instead of trying to bypass them.
Conclusion: Designing for Performance & Stability
Instead of blaming Dataverse for performance issues, treat long-running transactions, database blocking, and complex queries as traffic jams that need better planning.
By following best practices, you’ll create scalable solutions that improve performance, reduce errors, and enhance user experience.
