Pain points of Java Performance

Java performance is a concern for every Java application developer, since making an application fast is as vital as making it functional. Performance issues typically fall into three categories:

Database issues, which primarily relate to persistence configuration, caching, or database connection pool configuration.
Memory issues, which are generally garbage collection misconfigurations or memory leaks.
Concurrency issues, generally deadlocks, gridlocks, and thread pool configuration problems.



Database issues

Since the database is a standard component of most applications, it is also a common root of performance issues. Issues may arise from incorrect database access patterns, a badly sized connection pool, or missing tuning.

Persistence configuration

Even though Hibernate and other JPA implementations today allow fine tuning of database access, there are further choices, such as eager or lazy fetching, that may result in long response times and database overhead. Eager fetching makes fewer but more complex database calls, whereas lazy fetching makes more, but simpler and faster, database calls.
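The trade-off can be sketched in plain Java without a real JPA provider. In the sketch below, loadOrders is a hypothetical stand-in for a database call, and the lazy wrapper mimics what a JPA lazy proxy does: it defers the call until the data is first accessed, then caches the result.

```java
import java.util.function.Supplier;

public class FetchDemo {
    static int databaseCalls = 0;

    // Hypothetical stand-in for a real database query.
    static String loadOrders(String customer) {
        databaseCalls++;
        return "orders of " + customer;
    }

    // Lazy fetching: run the expensive loader only on first access, then cache.
    static <T> Supplier<T> lazy(Supplier<T> loader) {
        return new Supplier<T>() {
            private T value;
            private boolean loaded;
            public synchronized T get() {
                if (!loaded) { value = loader.get(); loaded = true; }
                return value;
            }
        };
    }

    public static void main(String[] args) {
        // Eager fetching: the call happens immediately, even if never used.
        String eagerOrders = loadOrders("alice");

        // Lazy fetching: no database call yet...
        Supplier<String> lazyOrders = lazy(() -> loadOrders("bob"));
        // ...until the data is actually needed; repeated access reuses it.
        System.out.println(lazyOrders.get());
        System.out.println(lazyOrders.get());
        System.out.println("database calls: " + databaseCalls);
    }
}
```

In real JPA code the equivalent knob is the fetch attribute on relationship mappings (for example FetchType.LAZY on a collection), but the cost model is the same as in this sketch.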

Problems occur when the load of the application increases and triggers a much larger database load. To diagnose this, look at the business transaction counters and the database counters, but above all at the correlation between a business transaction and its database calls. To avoid such issues, understand the persistence technology you use and set all its configuration options properly, so that its behavior matches your business domain requirements.


Caching optimizes application performance, since in-memory data is faster to access than persisted data. Problems arise when no caching is used at all, so that every time a resource is needed it is retrieved from the database. When caching is used, problems are caused by bad configuration. The basic things to watch here are the fixed size of a cache and the distributed cache configuration. Cached objects are stateful, unlike pools, which provide stateless objects, so a cache must be sized correctly so as not to exhaust memory. But what if an evicted object is requested again? This ‘miss’ ratio has to be accounted for in the cache settings, along with the memory the cache consumes.

Distributed caching may also cause problems. Synchronization is necessary when caches are spread across multiple servers: a cache update must be propagated to the caches on all servers. This is how consistency is achieved, but it is a very expensive procedure. When caching is used correctly, an increase in application load does not increase the database load; when the caching settings are wrong, the database load grows, causing CPU overhead and even a higher disk I/O rate.

To fix this issue, first analyze the database performance to decide whether a cache is needed at all. Then determine the cache size, using the hit ratio and miss ratio metrics. You can avoid facing caching issues, though, by planning your application properly before developing it. Make sure to use serialization and techniques that keep the application scalable.
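The two knobs discussed above, the fixed cache size and the hit/miss ratio, can be illustrated with a minimal bounded LRU cache built on LinkedHashMap's access order. This is a sketch, not any particular cache library's API:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class BoundedCache<K, V> {
    private final Map<K, V> map;
    private long hits, misses;

    public BoundedCache(int maxEntries) {
        // Access-order LinkedHashMap: iteration order is least- to most-recently used.
        this.map = new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxEntries; // evict the LRU entry once over capacity
            }
        };
    }

    public synchronized V get(K key) {
        V value = map.get(key);
        if (value == null) misses++; else hits++;
        return value;
    }

    public synchronized void put(K key, V value) { map.put(key, value); }

    public synchronized int size() { return map.size(); }

    public synchronized double hitRatio() {
        long total = hits + misses;
        return total == 0 ? 0.0 : (double) hits / total;
    }

    public static void main(String[] args) {
        BoundedCache<String, Integer> cache = new BoundedCache<>(2);
        cache.put("a", 1);
        cache.put("b", 2);
        cache.get("a");      // hit; also marks "a" as recently used
        cache.put("c", 3);   // capacity exceeded: evicts "b", the LRU entry
        cache.get("b");      // miss
        System.out.println("size=" + cache.size() + " hitRatio=" + cache.hitRatio());
    }
}
```

When hitRatio() stays low in production, the cache is either too small or caching the wrong objects, and the sizing should be revisited.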

Connection pools

Connection pools are normally created before the application starts, since connections are expensive to create. A pool of connections is shared across transactions, and the pool size limits the database load. Pool size matters: too few connections make business transactions wait while the database sits under-utilized, whereas too many connections cause longer response times and database overload. To fix this issue, check whether your application is waiting for a new connection or for a database query to execute. You can avoid it by optimizing the database and by testing the application with different pool sizes to see which one fits your case.
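A minimal sketch of the idea, assuming a hypothetical Connection class rather than java.sql.Connection: connections are created up front, acquisitions block when the pool is exhausted, and that blocking is exactly the wait you should be measuring.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class ConnectionPool {

    // Stand-in for a real (expensive-to-create) database connection.
    public static class Connection {
        final int id;
        Connection(int id) { this.id = id; }
    }

    private final BlockingQueue<Connection> idle;

    public ConnectionPool(int size) {
        idle = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            idle.add(new Connection(i)); // all connections created before use
        }
    }

    // Waits up to timeoutMs for a free connection; returns null on timeout.
    // Time spent blocked here is the "waiting for a connection" symptom.
    public Connection acquire(long timeoutMs) throws InterruptedException {
        return idle.poll(timeoutMs, TimeUnit.MILLISECONDS);
    }

    public void release(Connection connection) {
        idle.add(connection);
    }

    public int available() {
        return idle.size();
    }

    public static void main(String[] args) throws InterruptedException {
        ConnectionPool pool = new ConnectionPool(2);
        Connection first = pool.acquire(100);
        Connection second = pool.acquire(100);
        System.out.println("exhausted, third acquire: " + pool.acquire(50));
        pool.release(first);
        System.out.println("after release, acquire again: " + (pool.acquire(100) != null));
    }
}
```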



Memory issues

Memory issues concern the garbage collector and memory leaks.

Garbage Collector

Garbage collection may stop all threads in order to reclaim memory. When this procedure takes too much time or occurs too frequently, there is a problem. Its typical symptoms are CPU spikes and long response times. To fix this, enable -verbose:gc logging, use a performance monitoring tool to discover when major GCs occur, and a tool to monitor heap usage and possible CPU spikes. It is nearly impossible to prevent this problem entirely, but you can limit it by configuring the heap size and by recycling your JVM.
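Besides -verbose:gc logs, the JVM exposes cumulative GC counters through JMX, which is what monitoring tools read under the hood. A minimal reader:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcStats {

    // Sums the cumulative time (ms) all collectors have spent in GC so far.
    static long totalGcTimeMs() {
        long total = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            total += Math.max(0, gc.getCollectionTime()); // -1 means "unsupported"
        }
        return total;
    }

    public static void main(String[] args) {
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("%s: %d collections, %d ms%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
        System.out.println("total GC time: " + totalGcTimeMs() + " ms");
    }
}
```

A sudden jump in these counters between two samples is the "significant GC" signal a monitoring tool would alert on.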

Memory leaks

Memory leaks in Java happen differently than in C or C++, because they are more of a reference management problem: a reference to an object may be kept even though the object will never be used again. This can result in an OutOfMemoryError and require a JVM restart. When memory usage keeps growing and the heap runs out of memory, a memory leak has occurred. To resolve it, configure the JVM parameters appropriately. To prevent memory leaks, pay attention while coding to leak-sensitive Java collections and to session management. Share leak prevention tips with colleagues, have an expert review your application code, and use tools to detect memory leaks and analyze the heap.
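The reference management point can be made concrete with the most common leak shape: a long-lived (here static) collection that only ever grows. The names below are illustrative only.

```java
import java.util.ArrayList;
import java.util.List;

public class LeakDemo {
    // A long-lived collection: everything added here stays reachable forever.
    private static final List<byte[]> CACHE = new ArrayList<>();

    static void handleRequest() {
        CACHE.add(new byte[1024]); // 1 KB per request, never removed
    }

    static int retainedEntries() {
        return CACHE.size();
    }

    static void fix() {
        CACHE.clear(); // dropping the references makes the objects collectable
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) handleRequest();
        System.out.println("retained: " + retainedEntries());
        fix();
        System.out.println("retained after fix: " + retainedEntries());
    }
}
```

A heap analyzer would show LeakDemo.CACHE as the dominator keeping all those byte arrays alive; the fix is always the same, drop the reference when the data is no longer needed.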



Concurrency issues

Concurrency happens when several computations are performed at the same time. Java uses synchronization and locks to manage multithreading, but synchronization itself can cause thread deadlocks, gridlocks, and thread pool sizing issues.

Thread deadlocks

Thread deadlocks take place when two or more threads try to access the same resources: one waits for the other to release a resource and vice versa. When a deadlock occurs, the involved threads are blocked forever and the application gets slower as the JVM runs out of usable threads. Deadlocks are very hard to reproduce, so one way to address a deadlock issue is to capture a thread dump while the threads are deadlocked and analyze their stack traces. To prevent this issue, make your application and its resources as immutable as possible, use synchronization carefully, and check for possible thread interactions.
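Besides thread dumps, the JDK can report monitor deadlocks programmatically via ThreadMXBean.findDeadlockedThreads(). The sketch below deliberately deadlocks two daemon threads (each holds one lock and requests the other) and then asks the JVM for the deadlocked thread IDs:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;
import java.util.concurrent.CountDownLatch;

public class DeadlockDetector {
    static final Object lockA = new Object();
    static final Object lockB = new Object();

    // Starts two daemon threads that are guaranteed to deadlock, then
    // returns the thread IDs the JVM reports as deadlocked (null if none).
    static long[] createAndDetect() throws InterruptedException {
        CountDownLatch bothHoldFirstLock = new CountDownLatch(2);
        Runnable grabAthenB = () -> {
            synchronized (lockA) {
                bothHoldFirstLock.countDown();
                try { bothHoldFirstLock.await(); } catch (InterruptedException ignored) { }
                synchronized (lockB) { } // blocks forever: lockB is held by the other thread
            }
        };
        Runnable grabBthenA = () -> {
            synchronized (lockB) {
                bothHoldFirstLock.countDown();
                try { bothHoldFirstLock.await(); } catch (InterruptedException ignored) { }
                synchronized (lockA) { } // blocks forever: lockA is held by the other thread
            }
        };
        Thread t1 = new Thread(grabAthenB);
        Thread t2 = new Thread(grabBthenA);
        t1.setDaemon(true); // daemon threads let the JVM exit despite the deadlock
        t2.setDaemon(true);
        t1.start();
        t2.start();
        Thread.sleep(500); // give both threads time to block on their second lock
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        return threads.findDeadlockedThreads();
    }

    public static void main(String[] args) throws InterruptedException {
        long[] ids = createAndDetect();
        System.out.println("deadlocked threads: " + (ids == null ? 0 : ids.length));
    }
}
```

A thread dump taken at this point (for example with jstack) prints the same deadlock with full stack traces for analysis.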

Thread gridlocks

Thread gridlocks may happen when excessive synchronization is used, so that too much time is spent waiting for a single resource. The telltale combination is slow response times together with low CPU utilization, since many threads try to access the same part of the code and wait for the one that holds it to finish. To fix this, first inspect where your threads are waiting and why; then remove the unnecessary synchronization, in line with your business requirements.
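One common way to remove such a synchronization requirement, assuming the contended resource is a simple counter, is to replace the coarse lock with a contention-spreading structure such as java.util.concurrent.atomic.LongAdder:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.LongAdder;

public class GridlockDemo {
    static long syncCounter = 0;
    static final Object LOCK = new Object();
    static final LongAdder adder = new LongAdder();

    static void run(int threads, int perThread, Runnable increment) throws InterruptedException {
        CountDownLatch done = new CountDownLatch(threads);
        for (int t = 0; t < threads; t++) {
            new Thread(() -> {
                for (int i = 0; i < perThread; i++) increment.run();
                done.countDown();
            }).start();
        }
        done.await();
    }

    static long[] demo() throws InterruptedException {
        // Coarse lock: every thread serializes on one monitor, the "single resource".
        run(8, 10_000, () -> { synchronized (LOCK) { syncCounter++; } });
        // LongAdder: same total, but per-thread cells avoid the single hot lock.
        run(8, 10_000, adder::increment);
        return new long[] { syncCounter, adder.sum() };
    }

    public static void main(String[] args) throws InterruptedException {
        long[] totals = demo();
        System.out.println(totals[0] + " " + totals[1]);
    }
}
```

Both approaches count correctly; the difference under load is how much time threads spend blocked, which is exactly the slow-response/low-CPU gridlock symptom.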

Thread pool configuration

When an application runs in an application server or a web container, a thread pool controls how many requests are processed concurrently. If this thread pool is too small, requests wait a long time; if it is too big, the processing resources become too busy. So at a small pool size the CPU is underutilized while thread pool utilization is 100%, whereas at a huge pool size the CPU is extremely busy.

You can fix this issue by checking your thread pool utilization and CPU usage and deciding whether to increase or decrease the pool size. Avoiding it means tuning the thread pool, which is not easy to do. Finally, two general problems may occur: performance being treated as an afterthought, and performance problems being noticed first by the end users.
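With a ThreadPoolExecutor, the utilization numbers mentioned above can be read directly from the pool. The measure helper below is a hypothetical test harness: it saturates a pool of a given size with blocked tasks and reports how many threads are active versus how many requests are queued.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolMonitor {

    // Saturates a fixed-size pool with blocking tasks and snapshots utilization.
    static int[] measure(int poolSize, int tasks) throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                poolSize, poolSize, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<>());
        CountDownLatch started = new CountDownLatch(poolSize);
        CountDownLatch release = new CountDownLatch(1);
        for (int i = 0; i < tasks; i++) {
            pool.execute(() -> {
                started.countDown();
                try { release.await(); } catch (InterruptedException ignored) { }
            });
        }
        started.await(); // every worker thread is now busy
        int[] snapshot = { pool.getActiveCount(), pool.getQueue().size() };
        release.countDown();
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return snapshot;
    }

    public static void main(String[] args) throws InterruptedException {
        int[] counts = measure(4, 10);
        // All workers busy and requests queuing: the pool, not the CPU, is the bottleneck.
        System.out.println(counts[0] + " active, " + counts[1] + " queued");
    }
}
```

A sustained non-empty queue with all threads active while the CPU stays idle is the signal to grow the pool; all threads active with the CPU pegged is the signal to shrink it.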

The first case is a common issue. Typically, developers create an application that is functional but fails in performance tests. To solve this, they usually have to do an architectural review of the application, where performance analysis tools prove very helpful. To prevent this issue, test performance while developing the application, so continuous integration is the key.

In the second case, what happens when the end users of the application inform you that there are performance problems? There are tools to prevent this case, such as JMX, for examining your servers' behavior. Business Transaction Performance results combined with JMX results may help too. Method-level response time monitoring checks all methods invoked in a business transaction and finds the hotspots of the application. So you'd better use one of these tools, so that end users never have to tell you about performance.
