5 years in Spring, yet certification taught me this

After over 5 years of hands-on experience with Java and Spring/Spring Boot, I decided to take the plunge and go for the Spring Boot Certification. It’s one of those milestones many of us developers aim for, right?

Drawing from memories of my Oracle Java Certification learning process, I was ready for a long ride where theory often seems worlds apart from practice. However, as I delved deeper, I came across aspects of Spring Boot that don’t usually make it to your everyday news and tutorials.

In this article, I’m eager to share these unexpected findings. I’m happy to have uncovered cool features that have genuinely shaped my understanding. Keeping things straightforward, I’ll explain these with real examples, aiming to enrich or simply refresh our collective knowledge of Spring Boot.

However, this is not a definitive guide on how to pass the Spring Professional Developer certification, and it is far from comprehensive learning material.

This being said, let’s dive in…

TL;DR

  • Bean scopes are great, but when mixing them strange things start happening
  • @PreDestroy doesn’t get called for Prototype beans
  • BFPP vs BPP – one processes definitions, the other processes instances
  • Autowiring – you can do it with collections as well – even with generics
  • AOP – it depends – JDK Dynamic proxies vs CGLIB
  • JDBC – cool callbacks
  • @MatrixVariable
  • antMatchers vs mvcMatchers – trailing slash
  • permitAll() vs web().ignoring()

Beans – scopes, lifecycle, and (some) processors

I’ve always known that Spring’s foundation is built on beans, so understanding them was bound to help me along the way. All the bean scopes are there to help you shape your application’s behavior. Do you want a short-lived cache that spans a single request? Throw @Bean on a method that returns a HashMap, set the scope to REQUEST, and let the framework do the rest. Want to store a user’s preferences for as long as the user is online? Simply create a UserPreferences bean and give it SESSION scope. So simple; then how could this get weird?
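To make that concrete, here is a minimal sketch of both ideas; the bean names and the UserPreferences class are invented for the example:

```java
import java.util.HashMap;
import java.util.Map;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.context.annotation.RequestScope;
import org.springframework.web.context.annotation.SessionScope;

@Configuration
public class ScopesConfig {

    // Short-lived "cache" that only lives for the duration of a single HTTP request
    @Bean
    @RequestScope
    public Map<String, Object> requestCache() {
        return new HashMap<>();
    }

    // Holds a user's preferences for as long as their HTTP session is alive
    @Bean
    @SessionScope
    public UserPreferences userPreferences() {
        return new UserPreferences();
    }

    // Hypothetical preferences holder, just so the example compiles
    public static class UserPreferences {
        private String theme = "dark";
        public String getTheme() { return theme; }
        public void setTheme(String theme) { this.theme = theme; }
    }
}
```

Conveniently, @RequestScope and @SessionScope already default to a class-based scoped proxy, which matters as soon as you inject such short-lived beans into longer-lived ones, as we are about to see.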

Having fun with scopes

Imagine having a SINGLETON bean that depends on a PROTOTYPE bean. What does this translate to? In the bean creation and dependency injection phase, we will have one singleton bean instantiated and initialized, with a single prototype bean instance inside. However, for every request on the prototype bean, we need a new instance. How does Spring handle this? Well, there are two (main) ways in which we can dictate the behavior in this scenario:

  1. Proxy approach
    • By marking the prototype bean with @Scope(value = "prototype", proxyMode = ScopedProxyMode.TARGET_CLASS), Spring will inject a proxy into the singleton bean. Each time the singleton bean accesses the prototype bean, the proxy ensures that a new instance of the prototype bean is created and returned.
  2. Lookup Method Injection (the heck?) – ever heard of @Lookup?
    • You can declare a method that Spring will intercept, returning a new instance of the method’s return type every time it is called
    • There are two ways of doing this
      • The cleaner way: an abstract method annotated with @Lookup (the first class in the sketch below)

      • The uglier way: a concrete stub method annotated with @Lookup whose body never actually runs (the second class in the sketch below)
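A minimal sketch of both variants; CoffeeCapsule and the two machine classes are invented for the example:

```java
import org.springframework.beans.factory.annotation.Lookup;
import org.springframework.context.annotation.Scope;
import org.springframework.stereotype.Component;

@Component
@Scope("prototype")
class CoffeeCapsule {
}

// The cleaner way: an abstract method that Spring implements in a generated subclass
@Component
abstract class CoffeeMachine {

    public void brew() {
        CoffeeCapsule capsule = newCapsule(); // a fresh prototype instance on every call
        // ... use the capsule ...
    }

    @Lookup
    protected abstract CoffeeCapsule newCapsule();
}

// The uglier way: a concrete stub whose body never runs, because Spring overrides it anyway
@Component
class OtherCoffeeMachine {

    public void brew() {
        CoffeeCapsule capsule = newCapsule();
        // ... use the capsule ...
    }

    @Lookup
    protected CoffeeCapsule newCapsule() {
        return null; // dead code: replaced by the container at runtime
    }
}
```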

The key difference is that the abstract-method variant only works when the surrounding bean is created through component scanning (@Component): because Spring has to implement the abstract method dynamically in a generated subclass, the class cannot be instantiated in the usual way (for example, constructed directly in a @Bean factory method).

Not what you would expect from bean lifecycle

Scenario: you have a bean, and you need to add behaviour after the bean is set up and before it is destroyed. Say (for the sake of the article) that you open a database connection before the bean is handed to you, and release it once you’re done with the bean.

Spring conveniently offers two lifecycle annotations that you can use on your methods: @PostConstruct and @PreDestroy. Besides this, someone tells you that every time someone accesses the bean, you need to provide an isolated instance for “(almighty) security purposes”.

You go ahead and implement the logic accordingly, ship the code to production and move on to your weekend. You get a call Saturday evening that no one can access the application, and the on-call guy lets you know that the database doesn’t accept any more connections. You confidently tell him to restart the database and everything gets back to normal. Sunday evening the same thing happens. What went wrong?

While confidently using the lifecycle annotations provided by the framework, you forgot to check how the scope of the bean works, because, well, who does that?

PROTOTYPE bean scope ensures a new instance of the bean is created every time it’s requested from the Spring container. Unlike Singleton, which ensures a single shared instance, Prototype creates a fresh bean for each request.

However, another key difference is that for PROTOTYPE beans, @PreDestroy is not invoked!

The reason behind this is simple: Spring manages the full lifecycle of Singleton beans but with Prototype beans, it hands over the bean after initialization. The destruction or cleanup of prototype beans falls outside of Spring’s responsibility.

Imagine buying a coffee machine (Singleton Bean). You set it up once (@PostConstruct) and dispose of it when it’s no longer functional (@PreDestroy).

Now, think of a coffee capsule (Prototype Bean). You use a new one every time you make coffee (@PostConstruct). But when you throw it away, that’s outside the machine’s responsibility (@PreDestroy doesn’t get called), just like your database connections not getting cleaned up.
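A minimal sketch of the trap, with a made-up ConnectionHolder bean (the lifecycle annotations live in jakarta.annotation on Spring Boot 3+, javax.annotation on older versions):

```java
import jakarta.annotation.PostConstruct;
import jakarta.annotation.PreDestroy;

import org.springframework.context.annotation.Scope;
import org.springframework.stereotype.Component;

@Component
@Scope("prototype")
class ConnectionHolder {

    @PostConstruct
    void openConnection() {
        // runs for every new prototype instance handed out by the container
        System.out.println("Opening database connection");
    }

    @PreDestroy
    void closeConnection() {
        // never runs for prototype beans: once the instance is handed out,
        // Spring stops tracking it, so this cleanup silently leaks connections
        System.out.println("Releasing database connection");
    }
}
```

If a prototype bean really does need cleanup, the code that requested the instance has to trigger it explicitly.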

Behind the Beans: The Post Processor Diaries

While this topic deserves its own article, I would just like to point out one crucial difference that helped me differentiate between BeanFactoryPostProcessor and BeanPostProcessor. That is:

  • BeanFactoryPostProcessor operates at container level before any beans are instantiated and modifies the bean definitions themselves, not bean instances
    • one example use case for this is to modify the property values in the bean definition (replace @Value annotated fields with actual property values)
  • BeanPostProcessor acts on instances of beans, so after the bean has been instantiated and dependencies injected
    • this is where AOP proxies are usually set up around beans and methods

It is very important to know exactly where in the bean lifecycle each of these intervenes, so you can save yourself some headaches when customizing your beans.
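To make the definitions-versus-instances distinction visible, here is a minimal sketch of both; the logging is purely illustrative:

```java
import org.springframework.beans.BeansException;
import org.springframework.beans.factory.config.BeanFactoryPostProcessor;
import org.springframework.beans.factory.config.BeanPostProcessor;
import org.springframework.beans.factory.config.ConfigurableListableBeanFactory;
import org.springframework.stereotype.Component;

// Runs once, before any bean is instantiated: sees (and can tweak) bean *definitions*
@Component
class DefinitionLogger implements BeanFactoryPostProcessor {
    @Override
    public void postProcessBeanFactory(ConfigurableListableBeanFactory beanFactory) throws BeansException {
        for (String name : beanFactory.getBeanDefinitionNames()) {
            System.out.println("Definition registered: " + name);
        }
    }
}

// Runs for every bean *instance*, right around its initialization callbacks
@Component
class InstanceLogger implements BeanPostProcessor {
    @Override
    public Object postProcessAfterInitialization(Object bean, String beanName) throws BeansException {
        System.out.println("Initialized instance: " + beanName);
        return bean; // this is the spot where the AOP infrastructure returns a proxy instead
    }
}
```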

(Auto)Wiring for Success

Catch them all

By now we’re used to taking advantage of our precious dependency injection mechanism to bring the whole universe together into the same service. I recently found out there is an easier, more straightforward way to inject all the beans you need. Have you ever autowired a collection of beans? I know I hadn’t. Spring offers a way of injecting all beans of a particular type into the same collection. Imagine you like spam and want to notify your users on every platform available. You could manually trigger the notifications for each and every external notification channel you are using, or you could unify them under a single interface and let dependency injection simplify everything.
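For instance, something along these lines; NotificationChannel and the concrete channels are invented for the example:

```java
import org.springframework.stereotype.Component;

interface NotificationChannel {
    void send(String userId, String message);
}

@Component
class EmailChannel implements NotificationChannel {
    @Override
    public void send(String userId, String message) { /* send an email */ }
}

@Component
class SmsChannel implements NotificationChannel {
    @Override
    public void send(String userId, String message) { /* send a text */ }
}

@Component
class PushChannel implements NotificationChannel {
    @Override
    public void send(String userId, String message) { /* send a push notification */ }
}
```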

And then just autowire (yes, I know, stop using @Autowired) everything into the same List of beans.
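A sketch of what that could look like, using constructor injection rather than field injection:

```java
import java.util.List;

import org.springframework.stereotype.Service;

@Service
class NotificationService {

    // Spring injects every bean implementing NotificationChannel into this list
    private final List<NotificationChannel> channels;

    NotificationService(List<NotificationChannel> channels) {
        this.channels = channels;
    }

    public void spamEverywhere(String userId, String message) {
        channels.forEach(channel -> channel.send(userId, message));
    }
}
```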

Take a step back and think of how easy it is now to add another notification channel. SOLID much?

No matter what

Now let’s take this to another level. What if we implemented a generic notification MessageProcessor that we can specialize based on the type of the message? There’s no way we could autowire generics as well, right? Well…

In Java, due to type erasure, generic type information is lost at runtime. This would seem to pose a challenge for dependency injection frameworks like Spring, where type matching is crucial for autowiring dependencies. However, Spring overcomes this limitation through the ResolvableType class. This class provides a way to capture and retain the full generic type information at runtime, enabling Spring to perform accurate type matching even for generic types.

Just like before, we define a generic interface MessageProcessor<T> where T represents the type of message, then implement the interface for different message types:
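A sketch along those lines, with the message types invented for the example:

```java
import org.springframework.stereotype.Component;

interface MessageProcessor<T> {
    void process(T message);
}

class EmailMessage {
    String body;
}

class SmsMessage {
    String body;
}

@Component
class EmailMessageProcessor implements MessageProcessor<EmailMessage> {
    @Override
    public void process(EmailMessage message) {
        // handle an email message
    }
}

@Component
class SmsMessageProcessor implements MessageProcessor<SmsMessage> {
    @Override
    public void process(SmsMessage message) {
        // handle an SMS message
    }
}
```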

While you can still autowire all MessageProcessors like we did before, another cool trick is that you can also autowire by type, even if it’s generic:
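Continuing the sketch above, the specific generic flavour can be injected directly, and Spring resolves it to the matching implementation:

```java
import java.util.List;

import org.springframework.stereotype.Service;

@Service
class MessagingService {

    // Resolved to EmailMessageProcessor: Spring matches the full generic signature
    private final MessageProcessor<EmailMessage> emailProcessor;

    // And the collection trick still works across all type parameters
    private final List<MessageProcessor<?>> allProcessors;

    MessagingService(MessageProcessor<EmailMessage> emailProcessor,
                     List<MessageProcessor<?>> allProcessors) {
        this.emailProcessor = emailProcessor;
        this.allProcessors = allProcessors;
    }
}
```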

Always Order Pizza (AOP)

Code against interfaces, not implementations

What does this have to do with Spring and AOP? Well, little did I know that behind the famous magical toolbox of Spring’s Aspect Oriented Programming implementation stand two different mechanisms for creating proxies: CGLIB and JDK dynamic proxies.

Why do we need two of them? Because, as with everything else in programming, it depends. It depends on the context.

  • CGLIB proxies work by extending the target class. They generate a subclass at runtime and override the methods of the target class. These are ideal when proxying classes rather than interfaces, because CGLIB doesn’t require the target class to implement an interface. On the flip side, CGLIB cannot proxy final classes or final methods, as they can’t be overridden in the subclass.
  • JDK Dynamic proxies work by implementing the interfaces of the target class at runtime. They use reflection to invoke methods and require the target class to implement one or more interfaces. This approach is less invasive and doesn’t require subclassing, which makes it simpler and more transparent.

Spring uses JDK Dynamic Proxies by default when the bean implements interfaces, and CGLIB proxies when the bean does not. This approach allows Spring to handle a wide range of scenarios while maintaining compatibility and performance. If you think you know better, you can always explicitly specify the proxying mechanism to be used.
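For instance, class-based proxies can be forced globally; a minimal sketch (Spring Boot exposes the same switch as the spring.aop.proxy-target-class property, which it defaults to true):

```java
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.EnableAspectJAutoProxy;

@Configuration
// Force CGLIB subclass proxies even for beans that implement interfaces
@EnableAspectJAutoProxy(proxyTargetClass = true)
public class AopConfig {
}
```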

Proxying in circles

I want to take advantage of this opportunity and emphasize one thing that we all (kind of) know, but which is not very straightforward for someone who still thinks of Spring as magic. Proxies are a great mechanism for separating our application’s business behaviour from cross-cutting concerns (like security, logging, etc.), and Spring relies heavily on them. However, there are still limitations which might affect our application’s performance in ways we don’t expect. Let’s consider the following example:
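A minimal sketch of the situation, assuming caching is enabled elsewhere with @EnableCaching; MyService and the method names are just placeholders:

```java
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class MyService {

    public String methodA(String key) {
        // self-invocation: this call goes straight to "this", not through the Spring proxy
        return methodB(key);
    }

    @Cacheable("expensiveResults")
    public String methodB(String key) {
        // the expensive computation we hoped would be cached
        return "computed-" + key;
    }
}
```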

At first glance, you might expect the result of methodB to be cached on subsequent calls to methodA. It isn’t, and it is important to understand why.

When methodA is called from outside MyService, Spring’s AOP proxy intercepts this call. However, when methodA internally calls methodB, this call does not go through the proxy. Instead, it’s a direct internal method call within the same object.

As a result, the @Cacheable annotation is effectively bypassed, and the caching behavior does not get applied.

Java(script) Database Connectivity

If you’ve ever worked even a bit with JavaScript, you must have heard about callback hell. Well, this is exactly what popped into my mind when studying Spring’s JdbcTemplate.

I know, we are all running away from SQL by using cool, fancy frameworks and ORMs for our data access layer, but we need to remember to always honour our elders.

For this particular certification, there seems to be a slight emphasis on the JdbcTemplate result handlers (callbacks), so I thought they were worth mentioning. I also found them cool and discovered the subtle differences between them.

  1. .query() callbacks:
    • RowMapper retrieves data from the ResultSet and returns an object representing each row. Useful for mapping the result row by row (as the name already implies)
    • ResultSetExtractor retrieves data from the ResultSet and returns an object representing the entire result. It’s useful for aggregating results or mapping complex relations.
    • RowCallbackHandler processes each row of the ResultSet individually, allowing for more memory-efficient processing, especially for large datasets. Slight difference here, this handler processes the rows, which means it does not return anything.
  2. What if you need Column Names and Values:
    • JdbcTemplate allows easy access to the column names and values from the result set, enabling the mapping of database columns to entity attributes in your Java application. How?
      • queryForList is used to retrieve a list of rows from the database. Each row is represented as a Map<String, Object>, where the keys are the column names, and the values are the corresponding column values. So, we’ll have a List<Map<String,Object>>, pretty, right?
      • queryForMap is used when you expect a single row in the result. It returns a Map<String, Object> where, similar to queryForList, the keys are the column names, and the values are the corresponding column values.
  3. Query and Update Methods:
    • The query method is used for fetching data; it returns data and accepts a callback for further mapping or processing
    • The update method is used for insert, update, and delete operations; it returns an integer indicating the number of rows affected
    • The execute method is for executing general SQL statements, especially DDL or complex database procedures (yes, you can also use it for DML, but it’s not recommended, as it does not return the number of affected rows).
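A few of these side by side, assuming a simple COFFEE table with ID and NAME columns:

```java
import java.util.List;
import java.util.Map;

import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.core.RowCallbackHandler;
import org.springframework.stereotype.Repository;

@Repository
class CoffeeRepository {

    private final JdbcTemplate jdbcTemplate;

    CoffeeRepository(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    // RowMapper: one mapped object per row
    List<String> findAllNames() {
        return jdbcTemplate.query("SELECT NAME FROM COFFEE",
                (rs, rowNum) -> rs.getString("NAME"));
    }

    // RowCallbackHandler: processes each row, returns nothing
    void printAllNames() {
        RowCallbackHandler printer = rs -> System.out.println(rs.getString("NAME"));
        jdbcTemplate.query("SELECT NAME FROM COFFEE", printer);
    }

    // queryForList: every row as a Map of column name to column value
    List<Map<String, Object>> findAllAsMaps() {
        return jdbcTemplate.queryForList("SELECT * FROM COFFEE");
    }

    // update: returns the number of affected rows
    int rename(long id, String newName) {
        return jdbcTemplate.update("UPDATE COFFEE SET NAME = ? WHERE ID = ?", newName, id);
    }
}
```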

Path of Neo

Going over the Spring MVC chapter in the preparation book, I was pretty confident I could just take a quick look and move on to the other chapters. After all, I have been working with MVC and REST APIs since the beginning. What could possibly surprise me here?

My surprise was stumbling into the world of the Matrix somewhere I didn’t expect. No, I’m not insane, not yet. What do I mean by this? I found out that @RequestParam and @PathVariable are not the only ways of mapping URL parameters … (drum roll).

The @MatrixVariable annotation in Spring MVC offers a unique and flexible way to extract data from URL path segments. This annotation can be incredibly useful in scenarios where you need to deal with complex URL structures to retrieve specific data. I think it will all make sense if we consider the following example:

  1. Searching for Specific Coffee Blends:
    • URL Example: /coffees;roast=medium;origin=ethiopia;flavor=fruity
    • In this example, the URL is used to search for medium roast coffee blends from Ethiopia with fruity flavor notes. The @MatrixVariable annotation extracts the roast type (medium), origin (ethiopia), and flavor notes (fruity) from the URL path segment.
  2. Fetching Coffee Blend Details and Reviews:
    • URL Example: /coffees;roast=medium;origin=ethiopia;flavor=fruity/details;brand=BeanBrew/reviews
    • Here, the URL is structured to not only search for specific coffee blends but also to fetch detailed information and customer reviews for a particular brand (BeanBrew). This showcases how @MatrixVariable can be used for multi-level information retrieval within the same URL structure.

The code to handle such a scenario would look like this:
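A sketch of a handler for the first URL (the second, multi-segment URL works the same way, with pathVar pointing at the segment you need); note that with the legacy AntPathMatcher you also have to configure the UrlPathHelper with setRemoveSemicolonContent(false), otherwise the matrix variables are stripped before they ever reach the controller:

```java
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.MatrixVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class CoffeeController {

    // GET /coffees;roast=medium;origin=ethiopia;flavor=fruity
    // The {coffees} URI variable masks the whole segment so its matrix variables can be parsed.
    @GetMapping("/{coffees}")
    public String searchBlends(@MatrixVariable(pathVar = "coffees") String roast,
                               @MatrixVariable(pathVar = "coffees") String origin,
                               @MatrixVariable(pathVar = "coffees") String flavor) {
        return roast + " roast from " + origin + " with " + flavor + " notes";
    }
}
```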

Pretty cool, right?

Locking things up

There’s hardly any Spring Boot-based application that does not make use of Spring Security, which is great. The developers are doing a great job of abstracting away all the complexity of security while offering great APIs to bootstrap the security you need in your application in as few lines of code as possible. It has evolved a lot in the past years, and it still does, getting easier and easier to use while offering much more.

DISCLAIMER: Please don’t implement your own security unless you’re 99.99% certain that you know what you are doing. The same goes for configuring Spring Security. Make sure you understand what you are doing when disabling that one little configuration that we all disable (it popped into your mind while reading this, I know it).

Just like that one, there are other subtleties in the Spring Security configuration. I want to remind you of just a few of them, the ones that really sparked my interest.

1. antMatchers vs mvcMatchers

I realise that we all live in an ideal world where everyone works on the latest Spring / Spring Security version and deprecated code is instantly refactored and updated.

However, I still believe it’s worth mentioning that, while we now use .requestMatchers(), there were times when we had to decide between using antMatchers or mvcMatchers.

Did you take into consideration trailing slashes when making this decision? Or just tried out things until “they just worked”?

  • antMatchers utilizes Ant-style path patterns and does not automatically handle URL normalization for trailing slashes. This means .antMatchers("/secured") matches the exact /secured URL but not /secured/, potentially leaving endpoints accessible to unauthorized users due to slight URL variations.
  • mvcMatchers, on the other hand, aligns with Spring MVC’s URL interpretation. It is more comprehensive, matching /secured as well as /secured/, /secured.html, /secured.xyz, etc., thus handling potential configuration mistakes more effectively.
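For anyone still on that older API, the difference looks roughly like this; a sketch in the pre-5.8 WebSecurityConfigurerAdapter style, with invented paths:

```java
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;

@EnableWebSecurity
public class LegacySecurityConfig extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.authorizeRequests()
                // Ant-style: matches exactly /secured, but NOT /secured/
                .antMatchers("/secured").authenticated()
                // MVC-style: also matches /admin/, /admin.html, and friends
                .mvcMatchers("/admin").hasRole("ADMIN")
                .anyRequest().denyAll();
    }
}
```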

2. permitAll() vs web().ignoring()

I must confess I’m guilty of using .permitAll() any time I needed to exclude particular routes or resources from getting picked up by Spring Security. I know better now …

  •  permitAll()
    • Used within the httpSecurity configuration.
    • Allows all users, whether authenticated or not, to access a specified path.
    • Security Implications:
      • Requests to paths permitted by permitAll() still pass through the entire Spring Security filter chain.
      • This means that even though access is unrestricted, these requests are still subject to various security checks, including CSRF protection.
    • Ideal for paths that should be publicly accessible but still need some level of security checks, like a login page or public API endpoints.
  • web().ignoring()
    • Applied within the WebSecurity configuration.
    • Instructs Spring Security to completely bypass the security filter chain for specified paths.
    • Security Implications:
      • Bypassing the filter chain reduces processing overhead, enhancing performance for the specified paths.
      • Potential Security Risks: The complete bypass of security checks, including CSRF protection, can pose risks if incorrectly applied.
      • Inconsistencies and Monitoring Gaps: Since these requests do not pass through the filter chain, they are not logged or monitored by Spring Security, which can create blind spots in security monitoring.
    • Best suited for static resources like CSS, JavaScript, or public images, where security checks are unnecessary and performance is a priority.

Here’s an example showing how permitAll() and web().ignoring() can be used in a Spring Security configuration (this time upgraded for Spring Security 6+):
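A minimal sketch of what such a configuration can look like on Spring Security 6+; the path patterns are invented for the example:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityCustomizer;
import org.springframework.security.web.SecurityFilterChain;

@Configuration
@EnableWebSecurity
public class SecurityConfig {

    // permitAll(): public paths still go through the full security filter chain
    @Bean
    public SecurityFilterChain securityFilterChain(HttpSecurity http) throws Exception {
        http.authorizeHttpRequests(auth -> auth
                .requestMatchers("/login", "/api/public/**").permitAll()
                .anyRequest().authenticated());
        return http.build();
    }

    // web().ignoring(): these paths bypass the security filter chain entirely
    @Bean
    public WebSecurityCustomizer webSecurityCustomizer() {
        return web -> web.ignoring().requestMatchers("/css/**", "/js/**", "/images/**");
    }
}
```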

As previously stated, you should be aware of the mechanisms you are manipulating when configuring Spring Security, as one trailing slash could change your world.

Conclusion

Getting the Spring Professional Developer certification has been a lot of fun. One thing I loved is that after lots of real experience with Spring and Spring Boot I was able to get humble(d) and learn a lot of things again.

Finally, I’ve given myself the time and motivation to understand how the framework works, and I’m so glad I did. I really think that you should go through the preparation tools, whether you’re going for the certification or not. It will make a huge difference in your progress.

No matter how much we believe we know or how much experience we have, being humble will always help us learn new things and see our work in new ways. Learn more and share more.

Wishing you a wonderful holiday season!

May you rest and come back fresh and ready to tackle a new year!

Author: Vlad Dedita

Auror of the Spring Boot Ministry of Magic. Clean Code Enthusiast. Grateful and excited to be at the beginning of my technical writing journey.
