Chapter 18: jOOQ SPI (Providers and Listeners)

jOOQ provides a lot of hooks that allow us to alter its default behavior at different levels. Among these hooks, we have lightweight settings and configurations, and the heavy-duty, extremely stable Service Provider Interface (SPI) made of generators, providers, listeners, parsers, and so on. So, like any robust and mature technology, jOOQ comes with an impressive SPI dedicated to those corner cases where the core technology cannot help.

In this chapter, we scratch the surface of each of these hooks in order to cover the usage steps and some examples that will help you understand how to develop your own implementations. Our agenda includes the following:

  • jOOQ settings
  • jOOQ configuration
  • jOOQ providers
  • jOOQ listeners
  • Altering the jOOQ code generation process

Let's get started!

Technical requirements

The code for this chapter can be found on GitHub at https://github.com/PacktPublishing/jOOQ-Masterclass/tree/master/Chapter18.

jOOQ settings

jOOQ comes with a comprehensive list of settings (org.jooq.conf.Settings) that attempts to cover the most popular use cases related to rendering the SQL code. These settings are available declaratively (via jooq-settings.xml in the classpath) or programmatically via methods such as setFooSetting() or withFooSetting(), which can be chained in a fluent style. To take effect, Settings must be part of org.jooq.Configuration, and this can be done in multiple ways, as you can read in the jOOQ manual at https://www.jooq.org/doc/latest/manual/sql-building/dsl-context/custom-settings/. But most probably, in a Spring Boot application, you'll prefer one of the following approaches:

Pass global Settings to the default Configuration via jooq-settings.xml in the classpath (the DSLContext prepared by Spring Boot will take advantage of these settings):

<?xml version="1.0" encoding="UTF-8"?>
<settings>
  <renderCatalog>false</renderCatalog>
  <renderSchema>false</renderSchema>
  <!-- more settings added here -->
</settings>

Pass global Settings to the default Configuration via an @Bean (the DSLContext prepared by Spring Boot will take advantage of these settings):

@org.springframework.context.annotation.Configuration
public class JooqConfig {
  @Bean
  public Settings jooqSettings() {
    return new Settings()
      .withRenderSchema(Boolean.FALSE) // this is a setting
      ... // more settings added here
   }  
  ...
}

At some point, set a new global Settings that will be applied from this point onward (this is a global Settings because we use Configuration#set()):

ctx.configuration().set(new Settings()
   .withMaxRows(5)
   ... // more settings added here
   ).dsl()
   . // some query

Append new global settings to the current global Settings:

ctx.configuration().settings()
   .withRenderKeywordCase(RenderKeywordCase.UPPER); 
ctx. // some query

You can practice these examples in GlobalSettings for MySQL.

At some point, set a new local Settings that will be applied only to the current query (this is a local Settings because we use Configuration#derive()):

ctx.configuration().derive(new Settings()
   .withMaxRows(5)
   ... // more settings added here
   ).dsl()
   . // some query

Or, append a setting to the current global Settings and then derive a local Configuration that adds more settings on top of it:

ctx.configuration().settings()
   .withRenderMapping(new RenderMapping()
      .withSchemata(
         new MappedSchema()
            .withInput("classicmodels")
            .withOutput("classicmodels_test")));                
// 'derivedCtx' inherits settings of 'ctx'
DSLContext derivedCtx = ctx.configuration().derive(
    ctx.settings() // using here new Settings() will NOT 
                   // inherit 'ctx' settings
       .withRenderKeywordCase(RenderKeywordCase.UPPER)).dsl();

You can practice this example in LocalSettings for MySQL. It is highly recommended that you reserve some time to at least briefly scroll through the entire list of jOOQ-supported settings at https://www.jooq.org/javadoc/latest/org.jooq/org/jooq/conf/Settings.html. Next, let's talk about jOOQ Configuration.

jOOQ Configuration

org.jooq.Configuration represents the spine of DSLContext. DSLContext needs the information provided by Configuration for query rendering and execution. While Configuration takes advantage of Settings (as you just saw), it also accepts many other configurations that can be specified as in the examples from this section.

By default, Spring Boot gives us a DSLContext built on the default Configuration (the Configuration accessible via ctx.configuration()), and as you know, while providing custom settings and configurations, we can alter this Configuration globally via set() or locally by creating a derived one via derive().

But, in some scenarios, for instance, when you build custom providers or listeners, you'll prefer to build the Configuration to be aware of your artifacts right from the start instead of extracting it from DSLContext. In other words, when DSLContext is built, it should use the ready-to-go Configuration.

Before Spring Boot 2.5.0, this step required a little bit of effort, as you can see here:

@org.springframework.context.annotation.Configuration
public class JooqConfig {       
  @Bean
  @ConditionalOnMissingBean(org.jooq.Configuration.class)
  public DefaultConfiguration jooqConfiguration(
       JooqProperties properties, DataSource ds, 
       ConnectionProvider cp, TransactionProvider tp) {
    final DefaultConfiguration defaultConfig = 
      new DefaultConfiguration();
    defaultConfig               
     .set(cp)                                  // must have
     .set(properties.determineSqlDialect(ds))  // must have
     .set(tp) // for using SpringTransactionProvider
     .set(new Settings().withRenderKeywordCase(
          RenderKeywordCase.UPPER)); // optional
       // more configs ...
    return defaultConfig;
  }
}

This is a Configuration created from scratch (actually from the jOOQ built-in DefaultConfiguration) that will be used by Spring Boot to create the DSLContext. At a minimum, we need to specify a ConnectionProvider and the SQL dialect. Optionally, if we want to use SpringTransactionProvider as the default provider for jOOQ transactions, then we need to set it as in this code. After this minimum configuration, you can continue adding your settings, providers, listeners, and so on. You can practice this example in Before250Config for MySQL.

Starting with version 2.5.0, Spring Boot facilitates easier customization of jOOQ's DefaultConfiguration via a bean that implements a functional interface named DefaultConfigurationCustomizer. This acts as a callback and can be used as in the following example:

@org.springframework.context.annotation.Configuration
public class JooqConfig 
     implements DefaultConfigurationCustomizer {
  @Override
  public void customize(DefaultConfiguration configuration) {
     configuration.set(new Settings()
       .withRenderKeywordCase(RenderKeywordCase.UPPER)); 
       ... // more configs
    }
}

This is more practical because we can add only what we need. You can check out this example in After250Config for MySQL. Next, let's talk about jOOQ providers.

jOOQ providers

The jOOQ SPI exposes a suite of providers such as TransactionProvider, RecordMapperProvider, ConverterProvider, and so on. Their overall goal is simple—to provide some feature that is not provided by the jOOQ default providers. For instance, let's check out TransactionProvider.

TransactionProvider

We know that, in Spring Boot, jOOQ transactions are backed by a transaction provider named SpringTransactionProvider (the Spring Boot built-in implementation of jOOQ's TransactionProvider). By default, it exposes a read-write transaction with no name (null), with the propagation set to PROPAGATION_NESTED and the isolation level set to the default isolation level of the underlying database, ISOLATION_DEFAULT.

Now, let's assume that we implement a module of our application that serves only reports via jOOQ transactions (so we don't use @Transactional). In such a module, we don't want to allow writing, we want to run each query in a separate/new transaction with a timeout of 1 second, and we want to avoid the dirty reads phenomenon (a transaction reads the uncommitted modifications of another concurrent transaction that rolls back in the end). In other words, we need to provide a read-only transaction having the propagation set to PROPAGATION_REQUIRES_NEW, the isolation level set to ISOLATION_READ_COMMITTED, and the timeout set to 1 second.  

To obtain such a transaction, we can implement a TransactionProvider and override the begin() method as in the following code:

public class MyTransactionProvider 
        implements TransactionProvider {
  private final PlatformTransactionManager transactionManager;
  public MyTransactionProvider(
       PlatformTransactionManager transactionManager) {
   this.transactionManager = transactionManager;
 }
 @Override
 public void begin(TransactionContext context) {
  DefaultTransactionDefinition definition = 
   new DefaultTransactionDefinition(
    TransactionDefinition.PROPAGATION_REQUIRES_NEW);
  definition.setIsolationLevel(
   TransactionDefinition.ISOLATION_READ_COMMITTED);
  definition.setName("TRANSACTION_" + Math.round(1000));
  definition.setReadOnly(true);
  definition.setTimeout(1);
  TransactionStatus status =    
   this.transactionManager.getTransaction(definition);
  context.transaction(new SpringTransaction(status));
 }
 ...
}

Once we have the transaction provider, we have to configure it in jOOQ. Assuming that we are using Spring Boot 2.5.0+, and based on the previous section, this can be done as follows:

@org.springframework.context.annotation.Configuration
public class JooqConfig 
       implements DefaultConfigurationCustomizer {
  private final PlatformTransactionManager txManager;
  public JooqConfig(PlatformTransactionManager txManager) {
   this.txManager = txManager;
  }        
  @Override
  public void customize(DefaultConfiguration configuration) {
   configuration.set(new MyTransactionProvider(txManager));
  }
}

You can practice this example in A250MyTransactionProvider for MySQL. When you run the application, you'll notice at the console that the created transaction has these coordinates: Creating new transaction with name [TRANSACTION_1000]: PROPAGATION_REQUIRES_NEW, ISOLATION_READ_COMMITTED, timeout_1, readOnly.

If you are using a Spring Boot version prior to 2.5.0, then check out the application named B250MyTransactionProvider for MySQL.

And, of course, you can configure the provider via DSLContext as well:

ctx.configuration().set(
   new MyTransactionProvider(txManager)).dsl() ...;

Or, you can use the following:

ctx.configuration().derive(
  new MyTransactionProvider(txManager)).dsl() ...;

Now, let's consider another scenario solved via ConverterProvider.

ConverterProvider

We have to project some JSON functions and map the results hierarchically. We already know that this is no issue in a Spring Boot + jOOQ combo, since jOOQ can fetch the JSON and call Jackson (the default in Spring Boot) to map it accordingly. But we don't want to use Jackson; we want to use Flexjson (http://flexjson.sourceforge.net/). jOOQ is not aware of this library (jOOQ can detect only the presence of Jackson and Gson), so we need to provide our own org.jooq.ConverterProvider implementation that uses Flexjson to accomplish this task. Take your time to check the source in {A,B}250ConverterProvider for MySQL.
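
Before diving into the bundled applications, here is a minimal sketch of what such a provider could look like (this is not the application's actual source; the Flexjson calls and the fallback to jOOQ's DefaultConverterProvider are just one possible wiring):

import flexjson.JSONDeserializer;
import flexjson.JSONSerializer;
import org.jooq.Converter;
import org.jooq.ConverterProvider;
import org.jooq.JSON;
import org.jooq.impl.DefaultConverterProvider;

public class FlexjsonConverterProvider implements ConverterProvider {

  // keep jOOQ's default provider around as a fallback
  private final ConverterProvider delegate = new DefaultConverterProvider();

  @Override
  public <T, U> Converter<T, U> provide(Class<T> tType, Class<U> uType) {
    // intercept only conversions from org.jooq.JSON to a user type
    if (tType == JSON.class) {
      return Converter.ofNullable(tType, uType,
        t -> new JSONDeserializer<U>()
               .deserialize(((JSON) t).data(), uType),
        u -> tType.cast(JSON.valueOf(
               new JSONSerializer().deepSerialize(u))));
    }
    // everything else falls back to the out-of-the-box conversions
    return delegate.provide(tType, uType);
  }
}

Finally, let's focus on a scenario solved via RecordMapperProvider.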

RecordMapperProvider

We have a ton of legacy POJOs implemented via the Builder pattern, and we decide to write a set of jOOQ RecordMapper implementations for mapping queries to these POJOs. In order to streamline the use of these mappers, we also decide to write a RecordMapperProvider. Basically, this will be responsible for choosing the proper RecordMapper without our explicit intervention. Are you curious about how to do it? Then check out the {A,B}250RecordMapperProvider and RecordMapperProvider applications for MySQL. Essentially, these applications are the same, but they use different approaches to configure the RecordMapperProvider.

Regarding ConverterProvider and RecordMapperProvider, it's important to mention that these replace the out-of-the-box behavior; they don't enhance it. So, custom providers have to make sure to fall back to the default implementations if they can't handle a conversion/mapping, as in the sketch that follows.
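
For instance, honoring this rule, a RecordMapperProvider might look like the following sketch (LegacyOrder and its builder methods are hypothetical placeholders for one of our legacy POJOs; the fallback relies on jOOQ's DefaultRecordMapper):

import org.jooq.Record;
import org.jooq.RecordMapper;
import org.jooq.RecordMapperProvider;
import org.jooq.RecordType;
import org.jooq.impl.DefaultRecordMapper;

public class MyRecordMapperProvider implements RecordMapperProvider {

  @Override
  @SuppressWarnings("unchecked")
  public <R extends Record, E> RecordMapper<R, E> provide(
        RecordType<R> recordType, Class<? extends E> type) {

    // use the hand-written mapper only for the builder-based legacy POJO
    if (type == LegacyOrder.class) {      // hypothetical legacy POJO
      return rec -> (E) LegacyOrder.builder()
         .orderId(rec.get("order_id", Long.class))
         .status(rec.get("status", String.class))
         .build();
    }

    // anything else falls back to jOOQ's default mapping algorithm
    return new DefaultRecordMapper<>(recordType, type);
  }
}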

jOOQ listeners

jOOQ comes with a significant number of listeners that are quite versatile and useful in hooking us into jOOQ life cycle management for solving a wide range of tasks. Let's "arbitrarily" pick up the mighty ExecuteListener.

ExecuteListener

For instance, one of the listeners that you'll love is org.jooq.ExecuteListener. This listener comes with a bunch of methods that can hook into the life cycle of a Query, Routine, or ResultSet to alter the default rendering, preparing, binding, executing, and fetching stages. The most convenient approach to implementing your own listener is to extend the jOOQ default implementation, DefaultExecuteListener. This way, you can override only the methods that you want and still keep up with the SPI's evolution (however, by the time you read this book, it is possible that this default listener will have been removed, and all methods will have become default methods on the interface). Consider applying this technique to any other jOOQ listener, since jOOQ provides a default implementation for each of them (generally, for a FooListener, there is a DefaultFooListener).

For now, let's write an ExecuteListener that alters the rendered SQL that is about to be executed. Basically, all we want is to alter every MySQL SELECT by adding the /*+ MAX_EXECUTION_TIME(n) */ hint, which allows us to specify a query timeout in milliseconds. The jOOQ DSL already allows adding MySQL/Oracle-style hints via ctx.select(...).hint("/*+ ... */").from(...), but only an ExecuteListener can patch multiple queries without modifying the queries themselves. For this purpose, ExecuteListener exposes callbacks such as renderStart(ExecuteContext) and renderEnd(ExecuteContext), which are called before and after rendering the SQL from the QueryPart, respectively. Once we are in control, we can rely on ExecuteContext, which gives us access to the underlying connection (ExecuteContext.connection()), query (ExecuteContext.query()), rendered SQL (ExecuteContext.sql()), and so on. In this specific case, we are interested in accessing the rendered SQL and modifying it, so we override renderEnd(ExecuteContext) and call ExecuteContext.sql() as follows:

public class MyExecuteListener extends 
    DefaultExecuteListener {
  private static final Logger logger = 
    Logger.getLogger(MyExecuteListener.class.getName());
  @Override
  public void renderEnd(ExecuteContext ecx) {
    if (ecx.configuration().data()
        .containsKey("timeout_hint_select") &&
                 ecx.query() instanceof Select) {
      String sql = ecx.sql();
      if (sql != null) {
        ecx.sql(sql.replace(
         "select",
         "select " + ecx.configuration().data()
            .get("timeout_hint_select")
      ));
      logger.info(() -> {
        return "Executing modified query : " + ecx.sql();
      });
   }
  }
 }
}

The code from inside the decisional block is quite simple: we just capture the rendered SQL (the SQL that is about to be executed shortly) and modify it accordingly by adding the MySQL hint. But, what is ...data().containsKey("timeout_hint_select")? Mainly, Configuration comes with three methods that work together to pass custom data through Configuration. These methods are data(Object key, Object value), which allows us to set some custom data; data(Object key), which allows us to get some custom data based on a key; and data(), which returns the entire Map of custom data. So, in our code, we check whether the custom data of the current Configuration contains a key named timeout_hint_select (this is a name we have chosen). If such a key exists, it means that we want to add the MySQL hint (which was set as the value corresponding to this key) to the current SELECT, otherwise, we take no action. This piece of custom information was set as follows:

Configuration derived = ctx.configuration().derive();
derived.data("timeout_hint_select", 
             "/*+ MAX_EXECUTION_TIME(5) */");

Once this custom data is set, we can execute a SELECT that will be enriched with the MySQL hint by our custom ExecuteListener:

derived.dsl().select(...).fetch();

You can practice this example in A250ExecuteListener for MySQL. If you are using a Spring Boot version prior to 2.5.0, then go for B250ExecuteListener for MySQL. There is also an application named ExecuteListener for MySQL that does the same thing, but it "inlines" the ExecuteListener via CallbackExecuteListener (an ExecuteListener implementation that is composed from lambdas – useful if you prefer functional composition):

ctx.configuration().derive(new CallbackExecuteListener()
                   .onRenderEnd(ecx -> {
   ...}))
   .dsl()
   .select(...).fetch();

Most listeners have a functional composition approach as well that can be used as in the previous snippet of code. Next, let's talk about a listener named ParseListener.

jOOQ SQL parser and ParseListener

ParseListener (SQL Parser Listener) was introduced in jOOQ 3.15, but before discussing it, we should discuss the SQL Parser (org.jooq.Parser).

SQL Parser

jOOQ comes with a powerful and mature Parser API that is capable of parsing an arbitrary SQL string (or a fragment of it) into different jOOQ API elements. For instance, we have Parser.parseQuery(String sql), which returns the org.jooq.Query type containing a single query that corresponds to the passed sql.

One of the main functionalities of the Parser API is that it can act as a translator between two dialects. In other words, we have SQL in dialect X, and we can programmatically pass it through the SQL Parser to obtain the SQL translated/emulated for dialect Y. For instance, consider a Spring Data JPA application that contains a significant number of native queries written for the MySQL dialect like this one:

@Query(value = "SELECT c.customer_name as customerName, "
  + "d.address_line_first as addressLineFirst, "
  + "d.address_line_second as addressLineSecond "
  + "FROM customer c JOIN customerdetail d "
  + "ON c.customer_number = d.customer_number "
  + "WHERE (NOT d.address_line_first <=> "
  + "d.address_line_second)", nativeQuery = true)
List<SimpleCustomer> fetchCustomerNotSameAddress();

The idea is that management has decided to switch to PostgreSQL, so you should migrate all these queries to the PostgreSQL dialect, and you should do it with minimal downtime. Even if you are familiar with the differences between these two dialects and have no problem expressing queries in both, you are still under time pressure. This is a scenario where jOOQ can save you, because all you have to do is pass your native queries to the jOOQ Parser, and jOOQ will translate/emulate them for PostgreSQL. Assuming that you are using Spring Data JPA backed by Hibernate, all you need to do is add a Hibernate interceptor (StatementInspector) that exposes the SQL string that is about to be executed:

@Configuration
public class SqlInspector implements StatementInspector {
  @Override
  public String inspect(String sql) {
    Query query = DSL.using(SQLDialect.POSTGRES)
      .parser()
      .parseQuery(sql);
    if (query != null) {
        return query.getSQL();
    }
    return null; // interpreted as the default SQL string
  }
}

Done in 5 minutes! How cool is that?! Obviously, your colleagues will ask you what sorcery this was, so you have a good opportunity to introduce them to jOOQ. :)

If you check out the console output, you'll see that Hibernate reports the following SQL string to be executed against the PostgreSQL database:

SELECT c.customer_name AS customername,
       d.address_line_first AS addresslinefirst,
       d.address_line_second AS addresslinesecond
FROM customer AS c
JOIN customerdetail AS d 
  ON c.customer_number = d.customer_number
WHERE NOT (d.address_line_first IS NOT DISTINCT
           FROM d.address_line_second)

Of course, you can change the dialect and obtain the SQL for any of the jOOQ-supported dialects. Now, you have time to copy the jOOQ output and replace your native queries accordingly since the application continues to run as usual. At the end, simply decouple this interceptor. You can practice this application in JPAParser.

Besides parseQuery(), we have parseName(String sql), which parses the given sql into org.jooq.Name; parseField(String sql), which parses the given sql into org.jooq.Field; parseCondition(String sql), which parses the given sql into org.jooq.Condition; and so on. Please check out the jOOQ documentation to see all the methods and their flavors.
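
For a quick taste (the SQL fragments below are arbitrary, and ctx is the usual injected DSLContext), parsed fragments become ordinary jOOQ query parts that can be embedded in regular queries:

Parser parser = ctx.parser();

Field<?> field = parser.parseField("lower(customer_name)");
Condition condition = parser.parseCondition("credit_limit > 100000");
Table<?> table = parser.parseTable("customer");

// once parsed, they compose like any other jOOQ artifacts, for example:
// ctx.select(field).from(table).where(condition).fetch();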

But jOOQ can do even more via the so-called parsing connection feature (available for R2DBC as well). Basically, this means that the SQL string is passed through the jOOQ Parser and the output SQL can become the source of a java.sql.PreparedStatement or java.sql.Statement, which can be executed via these JDBC APIs (executeQuery(String sql)). This happens as long as the SQL string comes through a JDBC connection (java.sql.Connection) that is obtained as in this example:

(Since there's no way to decide from the syntax alone which dialect's semantics the input SQL follows, we explicitly set the parse dialect via Settings.withParseDialect().)

try (Connection c = DSL.using(url, user, pass)
      .configuration()
      .set(new Settings()
          .withParseDialect(SQLDialect.MYSQL)) 
      .dsl()
      .parsingConnection();  // this does the trick  
      PreparedStatement ps = c.prepareStatement(sql);
    ) {
     ...
}

The sql passed to the PreparedStatement represents any SQL string. For instance, it can be produced by JdbcTemplate, the Criteria API, EntityManager, and so on. Gathering the SQL string from the Criteria API and EntityManager can be a little bit tricky (since it requires a Hibernate AbstractProducedQuery action) but you can find the complete solution in JPAParsingConnection for MySQL.
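
For instance, here is a simplified sketch (not the bundled application's exact code) of how JdbcTemplate could be pointed at a parsing connection via Spring's SingleConnectionDataSource, so that MySQL-flavored SQL is translated on the fly to the dialect of the target database:

JdbcTemplate jdbcTemplate = new JdbcTemplate(
   new SingleConnectionDataSource(
      DSL.using(url, user, pass)
         .configuration()
         .set(new Settings()
             .withParseDialect(SQLDialect.MYSQL))
         .dsl()
         .parsingConnection(),
      true)); // suppressClose, the single connection is reused

// the MySQL-specific syntax is parsed and re-rendered before execution
List<Map<String, Object>> offices =
   jdbcTemplate.queryForList("SELECT * FROM office LIMIT 3");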

Besides the Parser API, jOOQ also exposes a translator between dialects via the Parser CLI (https://www.jooq.org/doc/latest/manual/sql-building/sql-parser/sql-parser-cli/) and via the online translator at https://www.jooq.org/translate. Now, we can talk about ParseListener.

SQL Parser Listener

It is quite easy to intuit that the SQL Parser Listener (org.jooq.ParseListener introduced in jOOQ 3.15) is responsible for providing hooks that allow altering the default behavior of the jOOQ parser.

For instance, let's consider the following SELECT, which uses the SQL CONCAT_WS(separator, str1, str2, ...) function:

SELECT concat_ws('|', city, address_line_first, 
  address_line_second, country, territory) AS address 
FROM office

This variadic function ignores NULL values and places a string separator/delimiter between the arguments concatenated into the resulting string. It is natively supported by MySQL, PostgreSQL, and SQL Server, but not by Oracle. Moreover, jOOQ (at least until version 3.16.4) doesn't support it either. One way to use it in our queries is via plain SQL as follows:

ctx.resultQuery("SELECT concat_ws('|', city, address_line_first, "
  + "address_line_second, country, territory) AS address "
  + "FROM office").fetch();

But, if we try to execute this query against Oracle, it will not work, since Oracle doesn't support this function and jOOQ doesn't emulate it in Oracle syntax. A solution consists of implementing our own ParseListener that emulates the CONCAT_WS() effect. For instance, the following ParseListener accomplishes this via the NVL2() function (please read all the comments in the code in order to get familiar with this API):

public class MyParseListener extends DefaultParseListener {
 @Override
 public Field parseField(ParseContext pcx) {
  if (pcx.parseFunctionNameIf("CONCAT_WS")) {
   pcx.parse('(');
   String separator = pcx.parseStringLiteral();            
   pcx.parse(',');
   // extract the variadic list of fields
   List<Field<?>> fields = pcx.parseList(",", 
       c -> c.parseField()); 
   pcx.parse(')'); // the function CONCAT_WS() was parsed      
   ...

After parsing, we prepare the Oracle emulation:

   ...
   // prepare the Oracle emulation
   return CustomField.of("", SQLDataType.VARCHAR, f -> {
    switch (f.family()) {
     case ORACLE -> {
      Field result = inline("");
      for (Field<?> field : fields) {
       result = result.concat(DSL.nvl2(field,
                  inline(separator).concat(
                    field.coerce(String.class)), field));
      }
      f.visit(result); // visit this QueryPart
     }
     // case other dialect ...    
     }
   });
  }
  // pass control to jOOQ
  return null;
 }
}

To keep the code simple and short, we have made some assumptions. Mainly, the separator and string literals should be enclosed in single quotes, the separator itself is a single character, and there should be at least one argument after the separator.

This time, when we do this:

String sql = ctx.configuration().derive(SQLDialect.ORACLE)
  .dsl()
  .render(ctx.parser().parseQuery("""
   SELECT concat_ws('|', city, address_line_first,  
     address_line_second, country, territory) AS address 
   FROM office"""));
ctx.resultQuery(sql).fetch();

Our parse listener (followed by the jOOQ parser) produces this SQL, which is compatible with Oracle syntax:

SELECT ((((('' || nvl2(CITY, ('|' || CITY), CITY)) || 
  nvl2(ADDRESS_LINE_FIRST, ('|' || ADDRESS_LINE_FIRST),   
       ADDRESS_LINE_FIRST)) || 
  nvl2(ADDRESS_LINE_SECOND, ('|' || ADDRESS_LINE_SECOND), 
       ADDRESS_LINE_SECOND)) || 
  nvl2(COUNTRY, ('|' || COUNTRY), COUNTRY)) || 
  nvl2(TERRITORY, ('|' || TERRITORY), TERRITORY)) ADDRESS
FROM OFFICE

You can practice this example in A250ParseListener for Oracle (for Spring Boot 2.5.0+) and in B250ParseListener for Oracle (for Spring Boot prior to 2.5.0). Besides parsing fields (Field), ParseListener can also parse tables (org.jooq.Table via parseTable()) and conditions (org.jooq.Condition via parseCondition()).

If you prefer functional composition, then check out CallbackParseListener. Next, let's quickly cover other jOOQ listeners.

RecordListener

Via the jOOQ RecordListener implementations, we can add custom behavior during UpdatableRecord events such as insert, update, delete, store, and refresh (if you are not familiar with UpdatableRecord, then consider Chapter 3, jOOQ Core Concepts).

For each event listened to by RecordListener, we have an eventStart() and an eventEnd() method. eventStart() is a callback invoked before the event takes place, while the eventEnd() callback is invoked after the event has happened.

For instance, let's consider that every time an EmployeeRecord is inserted, we have an algorithm that generates the primary key, EMPLOYEE_NUMBER. Next, the EXTENSION field always has the form xEMPLOYEE_NUMBER (for instance, if EMPLOYEE_NUMBER is 9887, then EXTENSION is x9887). Since we don't want to let people do this task manually, we can easily automate this process via RecordListener as follows:

public class MyRecordListener extends DefaultRecordListener {
 @Override
 public void insertStart(RecordContext rcx) {
  if (rcx.record() instanceof EmployeeRecord employee) {
   // call the secret algorithm that produces the PK
   long secretNumber = (long) (10000 * Math.random());
   employee.setEmployeeNumber(secretNumber);
   employee.setExtension("x" + secretNumber);
  }
 }
} 

It is probably worth mentioning that RecordListener doesn't apply to ordinary DML statements (let alone plain SQL templates). People often think they can add some security logic in there, which is then bypassed; it really only works on TableRecord/UpdatableRecord types. Starting from jOOQ 3.16, a lot of tasks that are currently solved with RecordListener are probably better solved with VisitListener, which will become much more powerful once the new query object model is in place (https://blog.jooq.org/traversing-jooq-expression-trees-with-the-new-traverser-api/). In jOOQ 3.16, it won't be ready for this task yet, but it might be in jOOQ 3.17.

You can practice this application in {A,B}250RecordListener1 for MySQL. Moreover, you can find the {A,B}250RecordListener2 application for MySQL, which extends this one by overriding insertEnd() to automatically insert a row in EMPLOYEE_STATUS based on the inserted EmployeeRecord:

@Override
public void insertEnd(RecordContext rcx) {
  if (rcx.record() instanceof EmployeeRecord employee) {
   EmployeeStatusRecord status = 
      rcx.dsl().newRecord(EMPLOYEE_STATUS);
   status.setEmployeeNumber(employee.getEmployeeNumber());
   status.setStatus("REGULAR");
   status.setAcquiredDate(LocalDate.now());
   status.insert();
 }        
}        

If you prefer functional composition, then check out CallbackRecordListener.

DiagnosticsListener

DiagnosticsListener is available from jOOQ 3.11 and it fits perfectly in scenarios where you want to detect inefficiencies in your database interaction. This listener can act at different levels, such as jOOQ, JDBC, and SQL levels.

Mainly, this listener exposes a suite of callbacks (one callback per problem it detects). For instance, we have repeatedStatements() for detecting N+1 problems, tooManyColumnsFetched() for detecting whether ResultSet fetches more columns than necessary, tooManyRowsFetched() for detecting whether ResultSet fetches more rows than necessary, and so on (you can find all of them in the documentation).

Let's assume a Spring Data JPA application that runs the following classical N+1 scenario (the Productline and Product entities are involved in a lazy bidirectional @OneToMany relationship):

@Transactional(readOnly = true)
public void fetchProductlinesAndProducts() {
  List<Productline> productlines 
   = productlineRepository.findAll();
  for (Productline productline : productlines) {
   List<Product> products = productline.getProducts();
   System.out.println("Productline: " 
    + productline.getProductLine()
    + " Products: " + products);
  }
}

So, there is a SELECT triggered for fetching the product lines, and for each product line, there is a SELECT for fetching its products. Obviously, in performance terms, this is not efficient, and jOOQ can signal this via a custom DiagnosticsListener as shown next:

public class MyDiagnosticsListener 
         extends DefaultDiagnosticsListener {    
 private static final Logger log = ...;
 @Override
 public void repeatedStatements(DiagnosticsContext dcx) {
  log.warning(() ->
   "These queries are prone to be an N+1 case: "
     + dcx.repeatedStatements());
 }
}

Now, the previous N+1 case will be logged, so you have been warned!

jOOQ can diagnose over a java.sql.Connection (diagnosticsConnection()) or a javax.sql.DataSource (diagnosticsDataSource() wraps such a connection in a DataSource). Exactly as in the case of a parsing connection, this JDBC connection proxies the underlying connection; therefore, you have to pass your SQL through this proxy. In a Spring Data JPA application, you can quickly improvise a diagnostics profile that relies on a SingleConnectionDataSource, as you can see in JPADiagnosticsListener for MySQL. The same case is available in SDJDBCDiagnosticsListener for MySQL, which wraps a Spring Data JDBC application. Also, the jOOQ manual has some cool JDBC examples that you should check out (https://www.jooq.org/doc/latest/manual/sql-execution/diagnostics/).
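
As a rough sketch of the wiring (the listener registration below assumes the derive(DiagnosticsListenerProvider...) overload, and the names are illustrative), the custom listener can be registered and the resulting proxy handed to your data access layer:

DSLContext diagCtx = ctx.configuration()
   .derive(new DefaultDiagnosticsListenerProvider(
       new MyDiagnosticsListener()))
   .dsl();

// every statement executed through this DataSource goes
// through the diagnostics proxy and gets inspected
DataSource diagnosedDataSource = diagCtx.diagnosticsDataSource();

Hand diagnosedDataSource to JdbcTemplate, JPA, or Spring Data JDBC (for instance, behind a dedicated diagnose profile) and the N+1 warning shown earlier will show up in the logs.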

TransactionListener

As its name suggests, TransactionListener provides hooks for interfering with transaction events such as begin, commit, and rollback. For each such event, there is an eventBegin(), called before the event, and an eventEnd(), called after the event. Moreover, for functional composition purposes, there is CallbackTransactionListener.

Let's consider a scenario that requires us to back up the data after each update of EmployeeRecord. By "back up," we mean saving an INSERT statement containing the pre-update data to the file corresponding to the employee being updated.

TransactionListener doesn't expose information about the underlying SQL; therefore, we cannot determine from inside this listener whether an EmployeeRecord is being updated or not. But we can do it from RecordListener via the updateStart() callback. When an UPDATE occurs, updateStart() is called and we can inspect the record type. If it is an EmployeeRecord, we can store its original (original()) state via data() as follows:

@Override
public void updateStart(RecordContext rcx) {
  if (rcx.record() instanceof EmployeeRecord) {
   EmployeeRecord employee = 
    (EmployeeRecord) rcx.record().original();
  rcx.configuration().data("employee", employee);
  }
}

Now, you may think that, at the update end (updateEnd()), we can write the EmployeeRecord original state to the proper file. But a transaction can be rolled back, and in such a case, we should roll back the entry from the file as well. Obviously, this is cumbersome. It is much easier to alter the file only after the transaction commits, when we are sure that the update succeeded. Here is where TransactionListener and commitEnd() become useful:

public class MyTransactionListener 
      extends DefaultTransactionListener {
 @Override
 public void commitEnd(TransactionContext tcx) {
  EmployeeRecord employee = 
    (EmployeeRecord) tcx.configuration().data("employee");
  if (employee != null) {
    // write to file corresponding to this employee
  }
 }
}

Cool, right!? You just saw how to combine two listeners to accomplish a common task. Check the source in {A,B}250RecordTransactionListener for MySQL.

VisitListener

The last listener that we'll briefly cover is probably the most complex one, VisitListener. Mainly, VisitListener is a listener that allows us to manipulate the jOOQ Abstract Syntax Tree (AST), which contains query parts (QueryPart) and clauses (Clause). So, we can visit QueryPart (via visitStart() and visitEnd()) and Clause (via clauseStart() and clauseEnd()).

A very simple example could be like this: we want to create some views via the jOOQ DSL (ctx.createOrReplaceView("product_view").as(...).execute()) and we also want to add them to the WITH CHECK OPTION clause. Since the jOOQ DSL doesn't support this clause, we can do it via VisitListener as follows:

public class MyVisitListener extends DefaultVisitListener {
 @Override
 public void clauseEnd(VisitContext vcx) {
  if (vcx.clause().equals(CREATE_VIEW_AS)) {
    vcx.context().formatSeparator()
       .sql("WITH CHECK OPTION");
  }
 }
}

While you can practice this trivial example in {A,B}250VisitListener for MySQL, I strongly recommend you read these two awesome articles from the jOOQ blog as well: https://blog.jooq.org/implementing-client-side-row-level-security-with-jooq/ and https://blog.jooq.org/jooq-internals-pushing-up-sql-fragments/. You'll have the chance to learn a lot about the VisitListener API. You never know when you'll need it! For instance, you may want to implement your own soft-delete mechanism, add a condition to each query, and so on. In such scenarios, VisitListener is exactly what you are looking for! Moreover, at the time this book was written, jOOQ had started to add a new player, called the Query Object Model (QOM), as a public API. This API facilitates an easy, intuitive, and powerful traversal of the jOOQ AST. You don't want to miss this article: https://blog.jooq.org/traversing-jooq-expression-trees-with-the-new-traverser-api/.

Next, let's talk about altering the jOOQ code generation process.

Altering the jOOQ code generation process

We already know that jOOQ comes with three code generators (for Java, Scala, and Kotlin). For Java, we use org.jooq.codegen.JavaGenerator, which can be shaped/customized declaratively (or programmatically) via a comprehensive set of configurations grouped under <configuration> (Maven), configurations (Gradle), or org.jooq.meta.jaxb.Configuration. But sometimes we need more control; in other words, we need a custom generator implementation.

Implementing a custom generator

Imagine a scenario where we need a query method that would be very handy to have in the generated jOOQ DAOs. Obviously, the jOOQ goal is to maintain a thin DAO layer that avoids a large number of methods caused by different types of query combinations (don't expect to see in the default DAO a query method such as fetchByField1AndField2(), since trying to cover all combinations of fields (even for only two fields) leads to a heavy DAO layer that most probably will not be fully exploited).

But, we can enrich the generated DAOs via a custom generator. An important aspect is the fact that a custom generator requires a separate project (or module) that works as a dependency for the project that is going to use it. This is needed because the generator must already be compiled and on the classpath when code generation runs, so the way to achieve this is by adding it as a dependency. Since we use a multi-module Spring Boot application, we can easily achieve this by adding the custom generator as a separate module of the project. This is very handy since most Spring Boot production apps are developed in multi-module style.

Speaking about the effective implementation of a custom generator, we have to extend the Java generator, org.jooq.codegen.JavaGenerator, and override the default-empty method, generateDaoClassFooter(TableDefinition table, JavaWriter out). The stub code is listed next:

public class CustomJavaGenerator extends JavaGenerator {
   @Override
   protected void generateDaoClassFooter(
         TableDefinition table, JavaWriter out) {
      ...
   }
}

Based on this stub code, let's generate additional DAO query methods.

Adding a query method to all DAOs

Let's assume that we want to add a query method to all the generated DAOs, for instance, a method that limits the number of fetched POJOs (records), such as List<POJO> findLimitedTo(Integer value), where value represents the number of POJOs to fetch in the List. Check out the code:

01:@Override
02:protected void generateDaoClassFooter(
03:            TableDefinition table, JavaWriter out) {
04:
05:   final String pType = 
06:    getStrategy().getFullJavaClassName(table, Mode.POJO);
07:
08:   // add a method common to all DAOs
09:   out.tab(1).javadoc("Fetch the number of records "
10:                + "limited by <code>value</code>");
11:   out.tab(1).println("public %s<%s> findLimitedTo("
12:         + "%s value) {", List.class, pType, Integer.class);
13:   out.tab(2).println("return ctx().selectFrom(%s)",  
14:              getStrategy().getFullJavaIdentifier(table));
15:   out.tab(3).println(".limit(value)");
16:   out.tab(3).println(".fetch(mapper());");
17:   out.tab(1).println("}");
18:}

Let's quickly see what is happening here:

  • In line 5, we ask jOOQ to give us the name of the generated POJO that corresponds to the current table; this name is used in our query method to return a List<POJO>. For instance, for the ORDER table, getFullJavaClassName() returns jooq.generated.tables.pojos.Order.
  • In line 9, we generate some Javadoc.
  • In lines 11-17, we generate the method signature and its body. The getFullJavaIdentifier() used at line 14 gives us the fully qualified name of the current table (for example, jooq.generated.tables.Order.ORDER).
  • The ctx() method used in line 13 and mapper() used in line 16 are defined in the org.jooq.impl.DAOImpl class. Each generated DAO extends DAOImpl and therefore has access to these methods.

Based on this code, the jOOQ generator adds at the end of each generated DAO a method as follows (this method is added in OrderRepository):

/**
 * Fetch the number of records limited by <code>value</code>
 */
public List<jooq.generated.tables.pojos.Order>
                    findLimitedTo(Integer value) {
   return ctx().selectFrom(jooq.generated.tables.Order.ORDER)
               .limit(value)
               .fetch(mapper());
}

How about adding methods only in certain DAOs?

Adding a query method in certain DAOs

Let's add a query method named findOrderByStatusAndOrderDate() only in the OrderRepository DAO. A simple and quick solution consists of checking the table name via the TableDefinition argument of the generateDaoClassFooter() method. For instance, the following code adds the findOrderByStatusAndOrderDate() method only in the DAO that corresponds to the ORDER table:

@Override
protected void generateDaoClassFooter(
           TableDefinition table, JavaWriter out) {
   final String pType 
      = getStrategy().getFullJavaClassName(table, Mode.POJO);
   // add a method specific to Order DAO
   if (table.getName().equals("order")) {
      out.println("public %s<%s> "
         + "findOrderByStatusAndOrderDate("
            + "%s statusVal, %s orderDateVal) {",
               List.class, pType,
                    String.class, LocalDate.class);
      ...
   }
}

This code generates findOrderByStatusAndOrderDate() only in jooq.generated.tables.daos.OrderRepository:

/**
 * Fetch orders having status <code>statusVal</code>
 *  and order date after <code>orderDateVal</code>
 */
public List<jooq.generated.tables.pojos.Order>
      findOrderByStatusAndOrderDate(String statusVal,    
         LocalDate orderDateVal) {
   return ctx().selectFrom(jooq.generated.tables.Order.ORDER)
      .where(jooq.generated.tables.Order.ORDER.STATUS
         .eq(statusVal))
         .and(jooq.generated.tables.Order.ORDER.ORDER_DATE
            .ge(orderDateVal))
      .fetch(mapper());
}

Besides table.getName(), for more control, you can refine the previous condition via table.getCatalog(), table.getQualifiedName(), table.getSchema(), and so on.

The complete example is available in AddDAOMethods for MySQL and Oracle.

As a bonus, if you need to enrich the jOOQ-generated DAOs with the corresponding interfaces, then you need a custom generator as in the application named InterfacesDao for MySQL and Oracle. If you check out this code, you'll see a so-called custom generator strategy. Next, let's detail this aspect.

Writing a custom generator strategy

You already know how to use <strategy> (Maven), strategy {} (Gradle), or withStrategy() (programmatic) to inject custom behavior for naming classes, methods, members, and so on during the jOOQ code generation process. For instance, we have used this technique for renaming our DAO classes in Spring Data JPA style.

But, overriding naming schemes during code generation can be accomplished via a custom generator strategy as well. For instance, this is useful when we want to generate certain method names as in our scenario that starts from the following query:

ctx.select(concat(EMPLOYEE.FIRST_NAME, inline(" "),        
        EMPLOYEE.LAST_NAME).as("employee"),
        concat(EMPLOYEE.employee().FIRST_NAME, inline(" "), 
               EMPLOYEE.employee().LAST_NAME).as("reports_to"))
   .from(EMPLOYEE)
   .where(EMPLOYEE.JOB_TITLE.eq(
          EMPLOYEE.employee().JOB_TITLE))
   .fetch();

This is a self-join that relies on the employee() navigation method. Conforming to the default generator strategy, writing a self-join is done via a navigation method having the same name as the table itself (for the EMPLOYEE table, we have the employee() method).

But, if you find EMPLOYEE.employee() a little bit confusing, and you prefer something more meaningful, such as EMPLOYEE.reportsTo() (or something else), then you need a custom generator strategy. This can be accomplished by extending the jOOQ DefaultGeneratorStrategy and overriding the proper methods described in the jOOQ manual: https://www.jooq.org/doc/latest/manual/code-generation/codegen-generatorstrategy/.

So, in our case, we need to override getJavaMethodName() as follows:

public class MyGeneratorStrategy 
        extends DefaultGeneratorStrategy {
  @Override
  public String getJavaMethodName(
        Definition definition, Mode mode) {
   if (definition.getQualifiedName()
         .equals("classicmodels.employee") 
            && mode.equals(Mode.DEFAULT)) {
       return "reportsTo";
   }
   return super.getJavaMethodName(definition, mode);
  }
}

Finally, we have to set this custom generator strategy as follows (here, for Maven, but you can easily intuit how to do it for Gradle or programmatically):

<generator>
 <strategy>
  <name>
   com.classicmodels.strategy.MyGeneratorStrategy
  </name>
 </strategy>
</generator>

Done! Now, after code generation, you can re-write the previous query as follows (notice the reportsTo() method instead of employee()):

ctx.select(concat(EMPLOYEE.FIRST_NAME, inline(" "), 
         EMPLOYEE.LAST_NAME).as("employee"),
         concat(EMPLOYEE.reportsTo().FIRST_NAME, inline(" "), 
            EMPLOYEE.reportsTo().LAST_NAME).as("reports_to"))
   .from(EMPLOYEE)
   .where(EMPLOYEE.JOB_TITLE.eq(
          EMPLOYEE.reportsTo().JOB_TITLE))
   .fetch();

The jOOQ Java default generator strategy follows the Pascal naming convention, which is the most popular in the Java language. But, besides the Pascal naming strategy, jOOQ also comes with KeepNamesGeneratorStrategy, a generator strategy that simply keeps names in place. Moreover, you may like to study JPrefixGeneratorStrategy and JVMArgsGeneratorStrategy. These are just some examples (they are not part of the jOOQ Code Generator) that can be found on GitHub at https://github.com/jOOQ/jOOQ/tree/main/jOOQ-codegen/src/main/java/org/jooq/codegen/example.

Summary

In this chapter, we have briefly covered the jOOQ SPI. Obviously, the tasks solved via an SPI are not daily tasks and require overall solid knowledge about the underlying technology. But, since you have read earlier chapters in this book, you should have no problems assimilating the knowledge in this chapter as well. But, of course, using this SPI to solve real problems requires more study of the documentation and more practice.

In the next chapter, we tackle logging and testing jOOQ applications.
