Laravel Application Performance Metrics That Actually Matter
We know that our Laravel applications must be performant, but how do we know whether they actually are? We know by measuring. To understand how well our apps perform, we evaluate them against a set of metrics. In this article, we will take a look at three specific measurements that form the foundation of our performance monitoring.
The Metrics Overwhelm Problem
If we do a web search for “Laravel performance metrics,” it won’t take long until we have a list of dozens of measurements that are “critical” for proper optimization. With so many to choose from, where do we even begin? How do we measure everything we are supposed to measure? Many developers get stuck searching for the perfect monitoring setup and never actually start monitoring their applications.
The “we’ll monitor everything” approach seems like a solution: set up an APM dashboard to view all metrics in one place. But searching for the right APM tool (Pulse, Sentry, New Relic, Nightwatch, and dozens of others) creates the same analysis paralysis as selecting individual metrics. Once we have the service configured, we’re faced with the same problem as before: how do we know which metrics to watch? Which are meaningful and which are not? This approach leads to noise, not insights. A comprehensive measurement system often prevents optimizations because it doesn’t actually tell us which measurements are important for our application.
Rather than monitoring everything in a comprehensive monitoring platform, I’d suggest starting with foundational metrics: three specific metrics that work for every Laravel application. We don’t need to measure everything. Measuring these three metrics will account for the vast majority, if not all, of our monitoring needs. Once we are actively monitoring these metrics and optimizing our app based on the insights, we may reach a point where our optimization requires something a little more specific to our app. At this point, we look for the bare minimum we need to add to our system to track the specific metrics we need to measure.
Laravel’s Need for a Specific Approach
Laravel’s architecture impacts which metrics we care about and why we care about them. Laravel operates in PHP’s stateless request-response model, where each request bootstraps the entire application from scratch. The application state is not shared between requests. Its configurations, database connections, and services are loaded on each request. While this creates some optimization constraints, it also offers some advantages. The biggest benefit is the lifecycle’s predictability, which means that performance bottlenecks follow consistent patterns. We know exactly when things happen on each request, and we know they will happen the same way. This makes identifying the most impactful optimization much easier.
Another architectural peculiarity of Laravel is the Eloquent ORM, an Active Record implementation that uses magic methods to provide an intuitive API. The ORM, while making developers’ lives easier, can inadvertently generate inefficient queries. The N+1 problem is particularly common when using Eloquent’s lazy loading inside loops. While Eloquent provides eager loading to mitigate this inefficiency, we need to remember to use it.
In addition to Eloquent, Laravel’s extensive use of other abstraction layers (service container, facades, middleware) adds computational overhead, making measurement even more critical.
Laravel applications benefit from a backend-first performance approach. While frontend performance matters, Laravel typically defers heavy processing to the backend, and frontend assets can be optimized through CDNs and streaming, so performance concerns are rarely exclusively frontend problems. This makes backend metrics more critical for Laravel-specific performance issues. Additionally, backend performance issues cascade into the user experience, so the benefits of backend optimization are realized on the frontend as well. For this reason, the metrics we will focus on are application metrics, not frontend metrics.
The Foundation Metrics Framework
I am using the following criteria for individual foundation metrics:
- Universal Applicability: The metric is relevant to every Laravel application, regardless of its architecture.
- High Signal-to-Noise Ratio: It directly indicates actionable problems with low false positives.
- Measurable with Standard Tools: It can be measured using common Laravel ecosystem tools.
- Enables Troubleshooting: It offers clear connections to actionable optimization steps.
In addition to these individual metrics, the set of metrics must collectively provide comprehensive coverage of Laravel performance issues. My goal is to balance this coverage with actionable simplicity; the system should cover the most ground without becoming overwhelming or unsustainable.
The system is composed of application response time, database query count, and error rate. They cover most of the issues we will encounter in a Laravel application, and optimizing one often improves the others due to shared underlying causes. These three measurements enable a complete and effective troubleshooting workflow.
Performance Metrics
Application Response Time
The application response time measures the total time from when Laravel receives an HTTP request to when it sends a complete response back to the client. The measurement includes routing, middleware, controllers, database queries, business logic, and view rendering. It does not account for network latency or client-side rendering.
This metric directly correlates to user experience. It is the most immediate indicator of application health under various conditions. While it does little to pinpoint the cause of an issue, it is great at surfacing problems and measuring the impact of optimizations.
In development, we can measure response time using Laravel Debugbar. In production, Laravel Pulse or APM tools can track the response time of every request from every user, giving us a significant amount of data on how our production environment performs. These tools can report averages and percentiles. The median (P50) is useful for seeing what a typical user experiences, but evaluating higher percentiles (e.g., P95 or P99) is better for identifying issues, because the tail percentiles capture the slow requests that the median hides.
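For a rough picture of what these tools record, here is a minimal sketch (not how Debugbar or Pulse actually work) of a middleware that logs how long Laravel spends handling each request. The class name and log format are illustrative, and it assumes the standard per-request lifecycle where LARAVEL_START is defined in public/index.php:

```php
<?php

namespace App\Http\Middleware;

use Closure;
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Log;

class LogResponseTime
{
    public function handle(Request $request, Closure $next)
    {
        // LARAVEL_START is set in public/index.php; fall back to "now" if it is missing.
        $start = defined('LARAVEL_START') ? LARAVEL_START : microtime(true);

        $response = $next($request);

        // Time spent inside the framework for this request, in milliseconds.
        $durationMs = (microtime(true) - $start) * 1000;

        Log::info('response_time', [
            'path'        => $request->path(),
            'duration_ms' => round($durationMs, 1),
        ]);

        return $response;
    }
}
```

Registered as global middleware (the exact registration point depends on the Laravel version), this gives us per-request timings that we can later aggregate into medians and percentiles.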
Response time has no universal thresholds for good or bad. The response time we should target varies from one app to another, as well as between request types. As general guidelines: target 200ms for simple requests and 500ms for complex operations. Consistently exceeding 500ms warrants investigation, and consistently measuring above 1 second very likely means there is an issue we can fix, or something we can optimize, that will have a significant impact on user experience. These guidelines apply to most Laravel apps, but they are not universal.
Response time is useful when measuring an app’s performance against its past values during optimization. Degradation in this metric often indicates areas for investigation. It is a good starting point for performance investigation workflows, as it offers the most comprehensive indication of progress.
Database Query Count
Database query count measures the number of SQL queries executed per HTTP request. This is especially crucial for Laravel apps where the convenience of Eloquent makes it easy to overlook potential inefficiencies. The textbook example of this is the N+1 problem, Laravel’s most common performance killer.
The N+1 Problem
Eloquent makes it simple to get a collection of rows from a database as Eloquent models, `User::all()`, and it provides a simple interface for finding related records: `$user->posts()`. While this is very convenient, it also makes it easy to run unnecessary queries. Eloquent lazy-loads relationships, meaning we only get the records we request, and we only get them when we request them. If we try to access records we did not previously request, Eloquent runs additional queries to fetch them at that moment. So, if we run one query to fetch the `User` records without requesting the associated `Post` records, we get the users without their posts. If we then loop through those users, getting each user’s posts inside the loop, each iteration runs another query for that user’s posts. We ran `N` queries, one for every user, to get their posts, plus one initial query to get the users, hence `N+1`. If we have 1,000 users, we just ran 1,001 queries. If, however, when we initially gathered our users we had joined the posts table to get each user’s posts, then when looping through the users we would already have their posts and would not need to query for them again, reducing 1,001 queries to a single query.

If we were writing this in raw SQL and looping through the results of a plain array, this problem would be easy to spot: when we tried to access a user’s posts inside the loop, we would realize we do not have the posts yet, and we would go back to the query and add the JOIN. However, because Eloquent’s lazy loading lets us use records we did not initially request, the problem is easy to overlook. Eloquent provides a solution, eager loading, which loads the related records before we use them, but we need to remember to use it.
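In Eloquent terms, the difference looks roughly like this, assuming a `User` model with a `posts` relationship:

```php
<?php

use App\Models\User;

// N+1: one query for the users, then one query per user for their posts.
$users = User::all();

foreach ($users as $user) {
    echo $user->posts->count(); // lazy-loads this user's posts -> one extra query each time
}

// Eager loading: two queries total, no matter how many users there are.
$users = User::with('posts')->get();

foreach ($users as $user) {
    echo $user->posts->count(); // posts are already loaded, no extra query
}
```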
Database interactions are the primary bottleneck in most Laravel applications. Since Laravel’s abstractions can hide inefficient database patterns, these bottlenecks can be hard to spot, especially if we do not have much data. These inefficiencies present a scaling problem as the queries multiply with data volume.
In development, tools like Laravel Debugbar can count queries, display duplicate queries, and track query execution time. APM tools have similar capabilities in production.
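As a rough illustration of what these tools do, we can also count queries ourselves with `DB::listen()`. This sketch assumes the classic one-request-per-process lifecycle; the middleware name and the logging threshold are made up for the example:

```php
<?php

namespace App\Http\Middleware;

use Closure;
use Illuminate\Http\Request;
use Illuminate\Support\Facades\DB;
use Illuminate\Support\Facades\Log;

class CountQueries
{
    public function handle(Request $request, Closure $next)
    {
        $count = 0;

        // Fires for every query executed through the database manager.
        DB::listen(function () use (&$count) {
            $count++;
        });

        $response = $next($request);

        if ($count > 50) {
            Log::warning('High query count', [
                'path'    => $request->path(),
                'queries' => $count,
            ]);
        }

        return $response;
    }
}
```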
In general, we should limit the number of queries as much as possible. If we find a simple page running more than 10 queries, it might be something to look into. We should expect more complex pages to reach low double digits. If we are running more than 50 queries per request, or if query counts scale with page content (5 items = 10 queries, 50 items = 100 queries), we likely have N+1 problems to fix.
Error Rate
Error rate measures the percentage of requests resulting in server errors (5xx status codes). Errors create performance drains through increased resource consumption, retry cascades, and error handling overhead. They also have a direct business impact, often resulting in a loss of user trust, and from the user’s perspective, reliability is closely linked to perceived performance. This metric can also serve loosely as a validation metric for optimization efforts.
Even if frequent errors in our application seem to have little performance impact, they are leading indicators of system instability and potential capacity issues as our application and user base scale.
Error rates can be measured in development with Laravel logging and exception handling. In production, error tracking services like Sentry and APM integrations can measure error rates. It is important to track error patterns over time, not just absolute, immediate numbers.
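To make the metric concrete, here is a rough sketch of counting requests and 5xx responses ourselves with cache counters, assuming a cache driver that supports increments (e.g., Redis). In practice, an error tracker or APM does this for us, and the class and key names here are illustrative:

```php
<?php

namespace App\Http\Middleware;

use Closure;
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Cache;

class TrackErrorRate
{
    public function handle(Request $request, Closure $next)
    {
        $response = $next($request);

        // Bucket counters by minute so the rate can be charted over time.
        $bucket = now()->format('Y-m-d-H-i');

        Cache::increment("metrics:requests:{$bucket}");

        if ($response->getStatusCode() >= 500) {
            Cache::increment("metrics:errors:{$bucket}");
        }

        // Error rate for a bucket = errors / requests * 100.
        return $response;
    }
}
```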
The application’s criticality impacts the numbers we should be targeting. But in general, with a sufficient number of requests, 0.1-1% is considered good. An error rate above 1-2% indicates a likely problem to address. Sudden spikes or sustained high error rates indicate critical issues.
To begin troubleshooting, we identify any correlation between error spikes and deployments, traffic increases, or known infrastructure issues or changes. Most error tracking systems make it easy to see where the error occurred and what caused it, so that will give us a head start in tracking the problem down.
How the Metrics Work Together
These metrics are interconnected. Database issues, as measured by a high query count, directly result in increased response times. Increased response times and high query counts can both result in increased error rates under load. Error spikes can cascade to affect response times for all users. So, optimizing one metric often improves others, and they often lead back to database interactions as a shared underlying cause.
When we’re troubleshooting and optimizing, response time is the best place to start. It gives us an objective baseline to see if our improvements are actually having an effect. It is also the best metric to identify user-facing performance problems. With our baseline identified, we investigate query counts to address the most common Laravel bottlenecks, looking for places in our app that result in many queries. Through the optimization process, we monitor error rates to ensure our efforts do not compromise stability, comparing the errors we get to the errors seen consistently in production. The optimization process is iterative. We use these metrics together to validate optimization effectiveness as we iterate.
Performance Investigation Workflow
- Start with Response Time: Establish our baseline and identify if there’s a user-facing problem.
- Investigate Query Count: Look for N+1 problems and inefficient database patterns.
- Monitor Error Rate: Ensure optimizations don’t compromise stability.
Next-Level Considerations
These three metrics are a solid foundation that every Laravel developer should use. However, this is only the foundation. Every application has different needs, and these metrics will not be sufficient for some applications. We should start with these, then build from them as problems arise. There are a few areas common to Laravel apps that these metrics do not cover. I chose not to cover them because they are not features every app uses. They should be monitored, however, if we rely heavily on those features, or if we are developing high-traffic applications, complex background processing, or multi-service architectures.
- Queue performance: Background job monitoring is not covered by HTTP-focused metrics.
- External API dependencies: Resources that are not managed by our app may require additional timeout and failure rate monitoring.
- Caching effectiveness: Cache hit rates are important for optimization validation.
This should be a progressive approach. We start with the foundation first, then expand based on bottlenecks discovered. We do not prematurely add metrics to our monitoring strategy in anticipation of issues.
Getting Started and Next Steps
Now that you understand what to monitor, it’s time to start measuring your application’s performance. Here are your immediate action steps:
- Set up basic monitoring for the three foundational metrics.
- Establish baseline measurements before optimization efforts.
- Create simple alerting thresholds based on the provided guidelines.
While implementation details are beyond this post’s scope, countless tutorials exist for each tool. Do not get bogged down in finding the perfect tutorial or the perfect tools. Just get started. I would recommend setting up Laravel Debugbar for development purposes. And if you want a nice interface to track things in, look into Laravel Pulse for production environments and Laravel Telescope for development.
Build a discipline of performance tracking. Review your metric trends regularly and use the metrics to guide optimization priorities. Treat measurement as a prerequisite to effective performance work. However, do not obsess over getting perfect numbers or needing constant improvement. Don’t let performance optimization stall true progress.