Wednesday, August 6, 2025

[Tips] Spring Boot - Check Out Spring MVC through Implementing a Simple Function


When it comes to Spring MVC, the well-known MVC architecture diagram comes to mind. It helps us understand how the parts (DispatcherServlet, Controller, Handler, ViewResolver, etc.) participate in the process of responding to a web request. Spring Boot, however, is characterized by auto-configuration; for example, the web.xml file where we used to configure those web components is not even generated in a Spring Boot Starter project. So how does the MVC model work here?  

We’re going to check it out by walking through the following example. What will we do? Fetch all the records in the Employees table and display them on a web page. To realize this, we’ll build a Spring web app with a view, a controller, and a model.


Records in Employees


Display on the view


This sample project is available on GitHub.



Prerequisite


Spring Tool Suite is a must-have. Assuming Eclipse is the development tool, you can go to Eclipse Marketplace, search for it, and get it installed.

The table, Employees, has been created in the database. Here we’re going to use Oracle XE.


SpringMVC project/schema.sql
-- DDL
CREATE TABLE Employees (
   Employee_Id NUMBER(6)
 , First_Name VARCHAR2(20)
 , Last_Name VARCHAR2(20)
 , Salary NUMBER(8)
 , Hire_Date CHAR(10)
 , Manager_Id NUMBER(6)
 , Department_Id NUMBER(4)
)
 NOLOGGING
 PARALLEL;

-- DML
INSERT ALL
 INTO Employees VALUES(100, 'David', 'OConnell', 4800, '2010/01/01', null, 10)
 INTO Employees VALUES(101, 'Susan', 'Grant', 6500, '2010/12/31', 100, 10)
 INTO Employees VALUES(200, 'Jennifer', 'Whalen', 4400, '2010/01/01', null, 20)
 INTO Employees VALUES(201, 'Bruce', 'Hartstein', 6000, '2010/12/31', 200, 20)
 INTO Employees VALUES(202, 'Pat', 'Fay', 6000, '2010/12/31', 200, 20)
SELECT * FROM DUAL;


Create a Spring Boot Web Project


Let’s start with creating a Spring Boot project. 

Click the New button and choose Spring Starter Project, as shown in the screenshot below.



Click Next.

Enter a preferred project name. Click Next. 

On the Dependencies tab, check Oracle Driver, Spring Data JDBC, Spring Web, and Thymeleaf. These dependencies will be added to the pom.xml file automatically by the wizard.



Click Finish.



Dependencies


If you prefer to manually manage the dependencies, make sure the following list of libraries is included in pom.xml. 


SpringMVC project/pom.xml
	<dependency>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-web</artifactId>
	</dependency>
	<dependency>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-thymeleaf</artifactId>
	</dependency>
	<dependency>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-jdbc</artifactId>
	</dependency>
	<dependency>
		<groupId>com.oracle.database.jdbc</groupId>
		<artifactId>ojdbc11</artifactId>
		<scope>runtime</scope>
	</dependency>


Configure a Data Source


We’ll let Spring Boot configure the default data source, so we only put the connection strings in the application property file. This is what they look like.


SpringMVC project/application.properties
spring.config.activate.on-profile=dev
spring.datasource.driver-class-name=oracle.jdbc.OracleDriver
spring.datasource.url=jdbc:oracle:thin:@localhost:1521/xepdb1
spring.datasource.username=user
spring.datasource.password=password


Data class


The data retrieved from the Employees table is stored in employee POJOs in memory. Here is how the class is defined.


SpringMVC project/EmpDto.java
public class EmpDto {
	private Integer employee_id;
	private String first_name;
	private String last_name;
	private Double salary;
	private String hire_date;	
	private Integer manager_id;
	private Long department_id;
	/*
	 * Constructors, Getters, Setters etc.
	 */
	…
}


Controller


An HTTP request eventually comes down to a handler method, where it is processed. The following annotations are used to declare controllers and handler methods.

  • @Controller – declaring a controller
  • @GetMapping – mapping an HTTP GET request to a method (@PostMapping, @PutMapping, @PatchMapping and @DeleteMapping work for the other request types.)


Here’s how the controller and the handler method are defined in this example.

The handler returns a ModelAndView object bearing a logical view name, "employees", and a model object, the collection of employee POJOs, which is exposed under the name "employees".


SpringMVC project/EmpController.java
@Controller
public class EmpController{
	
	@Autowired
	private EmpService empService;
		
	@GetMapping(AppConstant.PATH_EMP_FINDALL)
	public ModelAndView findAllEmps(ModelAndView mav) {
		List<EmpDto> emplist = empService.findAll();
		
		mav.setViewName(AppConstant.VIEW_EMPLOYEE);
		mav.addObject("employees", emplist);
	    
		return mav;
	}
}

The constant strings used in the preceding code block are defined as follows.


SpringMVC project/AppConstant.java
public final class AppConstant {
	public static final String VIEW_EMPLOYEE = "employees";
	public static final String PATH_EMP_FINDALL = "/findallemps";
}
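
With Thymeleaf on the classpath, Spring Boot's auto-configured view resolver maps the logical view name "employees" to src/main/resources/templates/employees.html. For comparison, here is a minimal alternative sketch (not part of the sample project; Model here is org.springframework.ui.Model) in which the handler returns the view name as a String and populates a Model instead of building a ModelAndView:

	@GetMapping(AppConstant.PATH_EMP_FINDALL)
	public String findAllEmps(Model model) {
		// the model attribute "employees" is exposed to the view, just like mav.addObject(...)
		model.addAttribute("employees", empService.findAll());
		// the returned String is the logical view name, resolved to templates/employees.html
		return AppConstant.VIEW_EMPLOYEE;
	}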


Model


Model encapsulates the data. Data doesn't come from nowhere, so we'll look at it in a broader context here: how the data is produced behind the scenes, e.g., the business logic involved, is conceptually treated as part of the model. 

In this project, there is one more layer between the service layer and the DAO. For the purpose of the demo, we bypass the business logic layer and access the DAO directly from the EmpService bean, which makes the program easier to read. 


Service Bean:


SpringMVC project/EmpService.java
@Service
public class EmpService{
	
	@Autowired
	EmpDao empDao;
	
	public List<EmpDto> findAll() {
		return empDao.findAll();
	}
}

DAO:

JDBC is employed to access data in the database. As Spring Boot takes care of the configuration, we can autowire a JdbcTemplate and use it in our DAO.


SpringMVC project/EmpDaoImpl.java
@Repository
public class EmpDaoImpl implements EmpDao {
	@Autowired
	JdbcTemplate jdbcTemplate;
	
	public List<EmpDto> findAll(){
		String sql = "SELECT employee_id, first_name, last_name, salary, "
			+ "hire_date, manager_id, department_id "
			+ "FROM employees "
			+ "ORDER BY employee_id";
        
		return jdbcTemplate.query(sql, new RowMapper<EmpDto>() {
			@Override
			public EmpDto mapRow(ResultSet rs, int rowNum) throws SQLException {
				EmpDto empDto = new EmpDto(
					rs.getInt("employee_id"),
					rs.getString("first_name"),
					rs.getString("last_name"),
					rs.getDouble("salary"),
					rs.getString("hire_date"),
					rs.getInt("manager_id"),
					rs.getLong("department_id"));

				return empDto;
			}
		});	
	};
}


View


The view, employees.html, is the front page presented before users. 

Thymeleaf helps us render the employee list in the view. We declare its namespace in the view.

	<html xmlns:th="http://www.thymeleaf.org">

The model object added by the handler can be accessed with the expression below.

	${employees}

This is what the view’s html looks like.


SpringMVC project/employees.html
<!DOCTYPE HTML>
<html xmlns:th="http://www.thymeleaf.org">
<head> 
    <title>Spring Web App</title> 
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
    <link href="https://cdn.jsdelivr.net/npm/bootstrap@5.3.7/dist/css/bootstrap.min.css" 
    	rel="stylesheet">
</head>
<body>
	<div class='container outer' style="width: 70%">
	    <h2>Employee Console</h2>
	    <br/>
	    
	    <div class="row">
	    		<table class="table table-hover">
		    		<thead>
				    <tr class="table-primary">
				    	<th scope="col">Employee ID</th>
				    	<th scope="col">First Name</th>
				    	<th scope="col">Last Name</th>
				    	<th scope="col">Salary</th>
				    	<th scope="col">Hire Date</th>
				    	<th scope="col">Manager ID</th>
				    	<th scope="col">Department ID</th>
				    </tr>
			    </thead>
			    <tbody>
				    <tr th:each="employee : ${employees}">
				        <td th:text="${employee.employee_id}">200</td>
				        <td th:text="${employee.first_name}">Susan</td>
				        <td th:text="${employee.last_name}">Grant</td>
				        <td th:text="${employee.salary}">6500</td>
				        <td th:text="${employee.hire_date}">2010/01/01</td>
				        <td th:text="${employee.manager_id}">100</td>
				        <td th:text="${employee.department_id}">10</td>
				    </tr>
			    </tbody>
		    </table>
	    </div>
	    <br/>
	    <div class="row">
	    		<div class="col"></div>
	    		<div class="col"></div>
	    		<div class="col"></div>
	    		<div class="col"></div>
	    		<div class="col"></div>
	    		<div class="col">
	    		<a href="/" class="btn btn-primary" type="button" 
	    			style="width:180px">Back</a>
	    		</div>
	    </div>
	</div>
</body>
</html>


Run the App


We still need an entry point to launch the view. Spring Boot serves index.html from the "static" folder at runtime, so we can embed a link in it. A very simple version can look like this.


index.html
<!DOCTYPE HTML>
<html>
<head> 
<title>Spring Web App</title> 
</head>
<body>
	<h2>Click the link below</h2>
	<br/>
	<div>
	<a href="/findallemps">Show Employees</a>
	</div>
</body>
</html>
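
For orientation, assuming the default Spring Boot resource layout, the two pages sit in these locations:

	src/main/resources/static/index.html          <- entry page, served at "/"
	src/main/resources/templates/employees.html   <- Thymeleaf view "employees"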

Once index.html is in place, launch the web app from Eclipse. 

Enter “localhost:8080” in the browser.

When the index page shows up, click the link. You’ll be directed to the view.




Recap


As a classic pattern for web applications, the MVC model has thrived in web app development since it was introduced, and it still prevails in this domain. We didn't delve into the mechanism behind the scenes in this demo; instead, our focus was to build a lightweight web app by taking advantage of Spring Boot's capabilities, so we could get a glimpse of the simplicity and productivity that Spring Boot provides.



Reference


[Tips] Spring Boot – Access Data with JDBC



[Tips] Spring Boot - Build a Web Client Consuming REST APIs


In this example, we're going to build a web client to call the REST APIs built in the previous blog, [Tips] Spring Boot - Create REST APIs with WebFlux. We'll call it the API provider going forward. 

Below are the URIs:

  • http://localhost:8080/rest/findalldpts - get a collection of departments,

  • http://localhost:8080/rest/finddptbyid/{department_id} - get the department identified by department_id.

The example projects are available on GitHub.


Prerequisite


Spring Tool Suite is required for this example. 

Assuming Eclipse is the IDE for development, the tool suite can be found in Eclipse Marketplace.



Create a Spring Boot Project


Go to File menu, click New, select Other, choose Spring Starter Project under Spring Boot.


Check Spring Reactive Web on New Spring Starter Project Dependencies page.

Click Finish button.



This will add WebFlux dependency to the pom.xml file.


SpringWebClient project/pom.xml
	<dependency>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-webflux</artifactId>
	</dependency>



Data Class


Departments are stored in department POJOs in the app. Here is how the class is defined.

SpringWebClient project/DptDto.java
public class DptDto {
	private Long department_id;
	private String department_name;
	private Integer manager_id;
	/*
	 * Constructors, Getters, Setters etc.
	 */
}


Configuration


As the API provider uses the default port (8080), this web client will take port 9090 when it is launched.

The host serving the REST APIs is defined as a property, which is read by the program when creating the SpringWebClient bean. 


SpringWebClient project/application.properties
server.port=9090
rest.host=http://localhost:8080

Instead of hard-coding the API’s paths in the calling methods, we declare a few constants, which are concatenated to form the URIs.


SpringWebClient project/AppConstant.java
public final class AppConstant {	
	public static final String REST_ROOT = "/rest";
	public static final String API_DPT_FINDALL = "/findalldpts";
	public static final String API_DPT_FINDBYID = "/finddptbyid/";
}

WebClient is used to access the endpoint. 

We expect findAllDpts() to get us a collection of items and findById() to give us a single item. At the same time, we also need to take into account an exceptional case: no item is found.

We use onStatus() to deal with this case: if a NOT_FOUND status is received, throw an exception.

.onStatus(status -> status.equals(HttpStatus.NOT_FOUND), clientResponse -> 
	Mono.error(new Exception("Departments not found")))

Subsequently, at onErrorResume(), return an empty Flux or an empty Mono. As a consequence, you won’t get an error when you block for the content.

.onErrorResume(Exception.class, e -> {return Flux.empty();})

SpringWebClient project/SpringWebClient.java
@Component
public class SpringWebClient {
	private final WebClient webClient;

	public SpringWebClient(
			WebClient.Builder builder, 
			@Value("${rest.host}") String restHost) {
		this.webClient = builder.baseUrl(restHost).build();
	}

	public Flux<DptDto> findAllDpts() {	
		String uri = AppConstant.REST_ROOT + AppConstant.API_DPT_FINDALL;
		return this.webClient
				.get()
				.uri(uri)
				.accept(MediaType.APPLICATION_JSON)
				.retrieve()
				.onStatus(status -> status.equals(HttpStatus.NOT_FOUND), 
					clientResponse -> Mono.error(
						new Exception("Departments not found")))
				.bodyToFlux(DptDto.class)
				.onErrorResume(Exception.class, 
					e -> {return Flux.empty();});
	}
	
	public Mono<DptDto> findById(Long department_id) {
		String uri = AppConstant.REST_ROOT + AppConstant.API_DPT_FINDBYID 
			+ Long.toString(department_id);
		
		return this.webClient
				.get()
				.uri(uri)
				.accept(MediaType.APPLICATION_JSON)
				.retrieve()
				.onStatus(status -> status.equals(HttpStatus.NOT_FOUND), 
					clientResponse -> Mono.error(
						new Exception("Department not found")))
				.bodyToMono(DptDto.class)
				.onErrorResume(Exception.class, 
					e -> {return Mono.empty();});
	}
}


Get this to work


In the main() method, we use the application context to get the SpringWebClient bean. For a Mono object or a Flux object, we call block() to get the items received from the API provider.

You might have noticed there is a slight difference between Mono and Flux when calling block() to get the items: for Flux, because a list of elements is expected, we call collectList() before block().


SpringWebClient project/SpringWebClientApplication.java
@SpringBootApplication
public class SpringWebClientApplication {

	public static void main(String[] args) {
		ConfigurableApplicationContext context 
			= SpringApplication.run(SpringWebClientApplication.class, args);
		
		SpringWebClient webClient = context.getBean(SpringWebClient.class);

		System.out.println(">> findById " + webClient.findById((long)30).block());
		
		List<DptDto> dptList = webClient.findAllDpts().collectList().block();
		dptList.forEach(dpt -> System.out.println(
			">> findAllDpts = " + dpt.toString()));
	}
}
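
As a side note, block() is fine for a small demo like this, but in a fully reactive application you would normally subscribe instead of blocking. A minimal sketch (not part of the sample project):

	// prints each department as it arrives, without blocking the calling thread
	webClient.findAllDpts()
		.subscribe(dpt -> System.out.println(">> findAllDpts = " + dpt));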


Test the Apps


Launch the API provider first. Once it is running, start the web client app we just built. On the console, you'll see output like this.


>> findById Department[department_id=30, department_name='Compliance', manager_id=300]
>> findAllDpts = Department[department_id=10, department_name='Administration', 
									manager_id=100]
>> findAllDpts = Department[department_id=20, department_name='Marketing', manager_id=200]
>> findAllDpts = Department[department_id=30, department_name='Compliance', manager_id=300]
>> findAllDpts = Department[department_id=40, department_name='Channel', manager_id=400]
>> findAllDpts = Department[department_id=99, department_name='Dummy', manager_id=0]


Recap


The non-blocking WebClient class is used to consume the REST APIs.

We also demonstrated the difference between handling Mono and Flux.

How to handle the case where no item is found was another point presented in the example.







[Tips] Spring Boot - Create REST APIs with WebFlux


REST API is an often-seen buzzword, but building a REST API with Spring WebFlux was a new try for me. When I was reading the WebFlux documentation, the keywords non-blocking and reactive caught my eye. How should we interpret them? 

First of all, the asynchronous pattern came to my mind. In a React app, for example, when retrieving data from an external source, we use the React.useEffect hook to handle it, so the app doesn't get stuck and can continue other operations; once the data becomes available, it gets rendered. On the server side of a servlet app, we use the @Async annotation to define an asynchronous method. The purpose the asynchronous pattern serves is to process tasks separately from the main flow.

I googled the difference between non-blocking and asynchronous. It can be summed up like this.

How are operations carried out in a non-blocking pattern? When a request comes in, an operation is initiated in response to it, but the thread doesn't block there; instead, it immediately starts serving other requests. Once the original operation is completed, a notification is issued, which triggers a callback to continue processing the operation.

What purpose does it serve? The non-blocking pattern aims to utilize resources more efficiently and to scale with flexibility. 

What scenarios is it fit for? It is especially suitable for highly concurrent environments.

What about reactive? It refers to a programming model built around events; to put it simply, we can call it an event-driven model. In this model, we arrange operations by reacting to events, e.g., data readiness, operation completion, file I/O, etc. 

This is what Spring WebFlux is built to address.

Mono and Flux, a couple of terms that we are going to meet in the example, are reactive types. They are like data types but differ in one way: they emit not only items but also completion signals. Mono carries zero or one item, whereas Flux encapsulates zero or many items.
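
A tiny sketch of the two types, using Reactor's factory methods (illustrative only, not part of the sample projects):

	Mono<String> one  = Mono.just("A");            // at most one item, then a completion signal
	Flux<String> many = Flux.just("A", "B", "C");  // zero to many items, then a completion signal
	many.subscribe(System.out::println);           // prints A, B, C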

The APIs we are going to build are identified by these routes.

  • “/rest/findalldpts” – Handler method: findAllDpts()

  • “/rest/finddptbyid/{department_id}” – Handler method: findById()

The source code is available on GitHub.



Prerequisite


Spring Tool Suite is required for this example. If you use Eclipse, you can go to Eclipse Marketplace to find and get it installed.

The Departments table is created beforehand in the database, Oracle XE.


SpringMVC project/schema.sql
CREATE TABLE Departments (
   Department_Id NUMBER(4)
 , Department_Name VARCHAR2(20)
 , Manager_Id NUMBER(6)
)
 NOLOGGING
 PARALLEL;

INSERT INTO Departments
WITH tabrows AS (
 SELECT 10, 'Administration', 100 FROM DUAL UNION ALL
 SELECT 20, 'Marketing', 200 FROM DUAL UNION ALL
 SELECT 30, 'Compliance', 300 FROM DUAL UNION ALL
 SELECT 40, 'Channel', 400 FROM DUAL UNION ALL
 SELECT 99, 'Dummy', null FROM DUAL
)
SELECT * FROM tabrows;


Create a Spring Boot Project


Click the New button, choose Spring Starter Project, and click Next.



Check Oracle Driver, Spring Data JDBC, Spring Reactive Web on the Dependencies page. 

Click Finish.



This is what the dependencies look like in pom.xml.


SpringREST project/pom.xml
	<dependency>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-webflux</artifactId>
	</dependency>
	<dependency>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-data-jdbc</artifactId>
	</dependency>	
	<dependency>
		<groupId>com.oracle.database.jdbc</groupId>
		<artifactId>ojdbc11</artifactId>
		<scope>runtime</scope>
	</dependency>


Configure a Data Source


The connection strings for Oracle are defined in the application property file.


SpringREST project/application-dev.properties
spring.datasource.driver-class-name=oracle.jdbc.OracleDriver
spring.datasource.url=jdbc:oracle:thin:@localhost:1521/xepdb1
spring.datasource.username=user
spring.datasource.password=password

In this demo project, data source auto-configuration is disabled, as shown in the code block below. 


SpringREST project/SpringRestApplication.java
@SpringBootApplication(exclude = { DataSourceAutoConfiguration.class })
public class SpringRestApplication {
	public static void main(String[] args) {
		SpringApplication.run(SpringRestApplication.class, args);
	}
}

Therefore, we extract the connection strings and manually create a data source bean in a configuration class. 

Along with it, a JdbcTemplate bean and a TransactionManager bean are created as well. 


SpringREST project/PrimaryDataSourceConfig.java
@Configuration(proxyBeanMethods = false)
public class PrimaryDataSourceConfig {
			
	/*
	 * Primary data source
	 */
	@Primary
	@Bean
	public DataSource dataSource(
    		@Value("${spring.datasource.driver-class-name}") String dsDriverClassName,
    		@Value("${spring.datasource.url}") String dsURL,
    		@Value("${spring.datasource.username}") String dsUsername,
    		@Value("${spring.datasource.password}") String dsPassword
    		) {
		DriverManagerDataSource dataSource = new DriverManagerDataSource();
		dataSource.setDriverClassName(dsDriverClassName);
		dataSource.setUrl(dsURL);
		dataSource.setUsername(dsUsername);
		dataSource.setPassword(dsPassword);
        
		return dataSource;
	}
	
	/*
	 * JdbcTemplate for primary data source
	 */
	@Primary
	@Bean
	public JdbcTemplate jdbcTemplate(
			DataSource dataSource) {
		return new JdbcTemplate(dataSource);
	}

	@Primary
	@Bean(name = "transactionManager")
	public PlatformTransactionManager transactionManager(
    		DataSource dataSource) {
		return new DataSourceTransactionManager(dataSource);
	}
}


Data Class


After the data is fetched from the database, it is stored in Department POJOs, which are passed across the app's logical layers to the API consumer.  

The class is defined in the following code.


SpringREST project/DptDto.java
public class DptDto {
	private Long department_id;
	private String department_name;
	private Integer manager_id;
	/*
	 * Construtors, Getters, Setters etc.
	 */
}


Configure a Router and Handlers


This part is the implementation of the APIs. Web requests are processed by handler methods. To map the routes to the handlers, we define a router bean, as shown in the following code block. 

Handler findAllDpts() will return either a collection of items or none.

Handler findById() will return either one item or none. Even if more than one record shares the same department_id, we return only the first element to the API consumer. The intention is to make it differ from findAllDpts(), so we can see how the client deals with each. 

If no item is found, a ServerResponse with a Not Found status is returned.


SpringREST project/DptRouter.java
@Configuration(proxyBeanMethods = false)
public class DptRouter {
	
	@Bean
	public RouterFunction<ServerResponse> route() {

	    return RouterFunctions
	      .route(GET(AppConstant.REST_ROOT+AppConstant.API_DPT_FINDALL)
	      	.and(accept(MediaType.APPLICATION_JSON)), 
	      	this::findAllDpts)
	      .andRoute(GET(AppConstant.REST_ROOT+AppConstant.API_DPT_FINDBYID), 
	      	this::findById);
	  }
	
	@Autowired
	private DptService dptService;
	
	public Mono<ServerResponse> findAllDpts(ServerRequest request) {
		
		List<DptDto> listDpt = dptService.findAll();	
		if (!ObjectUtils.isEmpty(listDpt)) {
			return ServerResponse.ok()
					.contentType(MediaType.APPLICATION_JSON)
					.body(BodyInserters.fromValue(listDpt));
		} else {
			return ServerResponse.notFound()
					.build();
		}
	}

	public Mono<ServerResponse> findById(ServerRequest request){
		Map<String, String> pathVariables = request.pathVariables();
		Long department_id = Long.parseLong(pathVariables.get("department_id")); 

		List<DptDto> listDpt = dptService.findById(department_id);
		
		if (!ObjectUtils.isEmpty(listDpt)) {

			return ServerResponse.ok()
					.contentType(MediaType.APPLICATION_JSON)
					.body(Mono.just(listDpt.getFirst()), DptDto.class);
		} else {
			return ServerResponse.notFound()
					.build();
		}
	}
}

To avoid literal strings in the methods, we use constants to form the routes. The constants are defined as follows. 


SpringREST project/AppConstant.java
public final class AppConstant {
	public static final String REST_ROOT = "/rest";
	public static final String API_DPT_FINDALL = "/findalldpts";
	public static final String API_DPT_FINDBYID = "/finddptbyid/{department_id}";
}


Data Persistence


This is a common Spring JDBC setup: at the service layer, inject a DAO bean and call its methods to get data from the tables.

For more details about data access with Spring JDBC, refer to [Tips] Spring Boot – Access Data with JDBC.


Service definition:

SpringREST project/DptService.java
@Service
public class DptService{

	@Autowired
	DptDao dptDao;
	
	public List<DptDto> findAll() {
		List<DptDto> dptlist = dptDao.findAll();		
		return dptlist;
	}
	
	public List<DptDto> findById(Long department_id){
		return dptDao.findById(department_id);
	}
}


DAO definition: 

SpringREST project/DptDaoImpl.java
@Repository
public class DptDaoImpl implements DptDao {

	@Autowired
	JdbcTemplate jdbcTemplate;

	@Override
	public List<DptDto> findAll(){	
		String sql = "SELECT department_id, department_name, manager_id "
			+ "FROM departments "
			+ "ORDER BY department_id";
	    	
		List<DptDto> dptlist = null;
		try {
			dptlist = jdbcTemplate.query(sql,
				(rs, rowNum) -> new DptDto(rs.getLong("department_id"), 
					rs.getString("department_name"), 
					rs.getInt("manager_id"))
			);
		} catch (Exception e) {
			System.out.println(e);
		}		
		return dptlist;
	}

	public List<DptDto> findById(Long department_id){

		String sql = "SELECT department_id, department_name, manager_id "
			+ "FROM departments "
			+ "WHERE department_id = ?";
    	
		List<DptDto> dptlist = null;
		try {
			dptlist = jdbcTemplate.query(sql, new Object[] {department_id}, 
				new int[] {Types.INTEGER}, new RowMapper<DptDto>() {
				@Override
				public DptDto mapRow(ResultSet rs, int rowNum) 
					throws SQLException {
					DptDto dptdto = new DptDto();
					dptdto.setDepartment_id(
						rs.getLong("department_id"));
					dptdto.setDepartment_name(
						rs.getString("department_name"));
					dptdto.setManager_id(
						rs.getInt("manager_id"));
					return dptdto;
				}
			});        	
		} catch (Exception e) {
			System.out.println(e);
		}
        
		return dptlist;
	}
}	
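
A side note: the same lookup in findById() can also be written with the varargs overload of query(), letting the driver infer the parameter type. A sketch of that alternative (not the project's code):

	dptlist = jdbcTemplate.query(sql,
		(rs, rowNum) -> new DptDto(rs.getLong("department_id"),
			rs.getString("department_name"),
			rs.getInt("manager_id")),
		department_id);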


Test the APIs


Click the Run button in Eclipse to get the app running. 

Open a browser and enter the URIs. You'll see the responses from the endpoint. 

This is what the results look like. 


Result of “/rest/findalldpts”


Result of “/rest/finddptbyid/30”
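
For reference, based on the rows seeded by schema.sql, the JSON payloads would look roughly like this (field names follow the DptDto properties; the exact output may differ):

	[{"department_id":10,"department_name":"Administration","manager_id":100},
	 {"department_id":20,"department_name":"Marketing","manager_id":200}, …]

	{"department_id":30,"department_name":"Compliance","manager_id":300}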



Recap


We touched on Spring WebFlux, which features non-blocking, reactive processing. 

Using it, we also built APIs that return a Mono and a Flux, respectively. How does the client handle them differently? Refer to [Tips] Spring Boot - Build a Web Client Consuming REST APIs.

Additionally, no item being found is a case we need to consider when designing a server-side app.



Reference


Building a Reactive RESTful Web Service 

Spring WebFlux 


Tuesday, August 5, 2025

[Tips] Spring Boot - Build a Spring Batch


An enterprise app seldom runs without batches, especially for processing data exchanged with other apps. ETL tools easily come to mind, and true, they are always an option on the table. At the same time, Spring Batch appears to be another good choice: it leverages the features of the Spring Framework such as logging, tracing, transaction management, resource management, AOP, etc.; it is lightweight while supporting robust batch apps; and it can carry out complex and demanding daily business operations. 

We'll lift the veil by walking through an example that imports a CSV file into tables in the database, as illustrated in the diagram below.




Passwords are encrypted and then loaded into the Users table. Roles are transformed by prefixing them with "ROLE_".

Here's what the workflow looks like. 



The source code is available on GitHub.



Prerequisite


Make sure Spring Tool Suite is installed and the tables are created in the database as well. By the way, the "id" column of the Users table is populated by the sequence "users_seq".


SpringMVC project/schema.sql
create table users (
  id number
 ,username varchar2(64) unique
 ,password varchar2(255) not null
 ,enabled number(1) not null
);

create table authorities (
  username varchar(64) not null
 ,authority varchar(64) not null
);

create sequence users_seq
 start with 1
 increment by 1
 nocache
 order
 nocycle;


One more note: Spring Batch logs execution status to several batch-dedicated metadata tables, which need to be created beforehand. The Oracle DDL file, schema-oracle.sql, is located in the org.springframework.batch.core jar file under Maven Dependencies. 
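
Alternatively, if your Spring Boot version supports it, you can let Spring Boot create these metadata tables at startup by adding the following property (an option not used in the sample project):

	spring.batch.jdbc.initialize-schema=always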



Create a Spring Boot Project


Go to File menu, click New, select Other, choose Spring Starter Project under Spring Boot.


Spring JDBC along with the Oracle driver will be used for data persistence. Select them on the Dependencies tab. Of course, you can also manually add them to pom.xml later, as shown in the Dependencies section.




Dependencies


We're going to truncate the tables through JDBC before the import and then load the data into Oracle, so their dependencies are added to the pom.xml file, along with Spring Batch itself. 


SpringBatch project/pom.xml
	<dependency>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-batch</artifactId>
	</dependency>
	<dependency>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-data-jdbc</artifactId>
	</dependency>
	<dependency>
		<groupId>com.oracle.database.jdbc</groupId>
		<artifactId>ojdbc11</artifactId>
		<scope>runtime</scope>
	</dependency>


Property Setting


We assume the file could be in any directory on the server, so we define a couple of properties in the application property file: "import.path" indicates the directory and "file.users" specifies the file name. 


SpringBatch project/application.properties
spring.profiles.active=dev
import.path=xxx/shared/
file.users=users.csv

To connect to the database, we provide the connection strings and use the default data source that is configured by Spring Boot.


SpringBatch project/application-dev.properties
spring.datasource.driver-class-name=oracle.jdbc.OracleDriver
spring.datasource.url=jdbc:oracle:thin:@localhost:1521/xepdb1
spring.datasource.username=user
spring.datasource.password=password


Configure the job


The job takes two steps:

truncateStep – implements a tasklet that truncates the tables beforehand;
importFileStep – implements the file reading, data transformation, and data writing tasks.

They are the implementation of the preceding job flow.


SpringBatch project/FileToDbBatchConfig.java
@Configuration
@EnableBatchProcessing
public class FileToDbBatchConfig {
	public static final String JOB_NAME = "FileToDbJob";
	
	@Autowired
	public JobRepository jobRepository;
	
	@Autowired
	public PlatformTransactionManager transactionManager;
	
	/*
	 * Job flow
	 */
	@Bean
	public Job ftdImportUserJob() {
		return new JobBuilder(JOB_NAME, jobRepository)
				.incrementer(new RunIdIncrementer())
				.validator(ftdJobParamValidator())
				.start(truncateStep())
				.next(importFileStep())
				.build();
	}

	@Bean
	public JobParametersValidator ftdJobParamValidator() {
		String[] requiredKeys = new String[]{"filePath"};
		String[] optionalKeys = new String[]{"executedTime"};
		
		return new DefaultJobParametersValidator(requiredKeys, optionalKeys);
	}
	
	/*
	 * truncateStep: run a tasklet truncating the users and authorities tables
	 */
	@Autowired
	MethodInvokingTaskletAdapter ftdTruncateStepTasklet;
	
	@Bean
	public Step truncateStep() {
		return new StepBuilder("truncateStep", jobRepository)
				.tasklet(ftdTruncateStepTasklet, transactionManager)
				.build();
	}
	
	/*
	 * importFileStep: read the csv file and write to the database
	 */
	@Autowired
	FlatFileItemReader<User> ftdImportFileStepReader;
	
	@Autowired
	ItemProcessor<User, User> ftdImportFileStepProcessor;
	
	@Autowired
	JdbcBatchItemWriter<User> ftdImportFileStepWriter;
	
	@Bean
	public Step importFileStep() {
		return new StepBuilder("importFileStep", jobRepository)
				.<User, User>chunk(10, transactionManager)
				.reader(ftdImportFileStepReader)
				.processor(ftdImportFileStepProcessor)
				.writer(ftdImportFileStepWriter)
				.build();
	}
}


Truncate table step

In this step, the tasklet calls UsersDao’s truncateUsers() method to delete all data from the tables.


SpringBatch project/FileToDbBatchTasklet.java
@Configuration
public class FileToDbBatchTasklet {
	
	@Autowired
	private UsersDao usersDao;
	
	@Bean
	public MethodInvokingTaskletAdapter ftdTruncateStepTasklet() {
		MethodInvokingTaskletAdapter adapter = new MethodInvokingTaskletAdapter();
		adapter.setTargetObject(usersDao);
		adapter.setTargetMethod("truncateUsers");
		
		return adapter;
	}

}

SpringBatch project/UsersDaoImpl.java
@Repository
public class UsersDaoImpl implements UsersDao{
	
	@Autowired
	JdbcTemplate jdbcTemplate;
	
	public ExitStatus truncateUsers() {
		String truncUsers = "truncate table users";
		String truncAuthorities = "truncate table authorities";
		
		jdbcTemplate.execute(truncUsers);
		jdbcTemplate.execute(truncAuthorities);
		
		return ExitStatus.COMPLETED;
	}
}


Import file step

This step implements a reader, a processor, and a writer, which realize the file reading, data transformation, and data writing functions in the workflow, respectively. 

The reader reads records from the CSV file and saves them to User objects.


SpringBatch project/FileToDbBatchReader.java
@Configuration
public class FileToDbBatchReader {

	@Bean
	@StepScope
	@Value("#{jobParameters['filePath']}")  // to get job parameter
	public FlatFileItemReader<User> ftdImportFileStepReader(String filePath) {
		return new FlatFileItemReaderBuilder<User>()
			.name("UserItemReader")
			.resource(new FileSystemResource(filePath))
			.delimited()
			.names(new String[]{"username", "password", "role", "enabled"})
			.targetType(User.class)
			.build();
	}
}

SpringBatch project/User.java
public class User {
	private String username;
	private String password;
	private String role;
	private Integer enabled;
	/*
	 * Constructors, Getters, Setters etc.
	 */
}
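
For reference, each line of users.csv follows the column order declared in the reader (username, password, role, enabled); the values below are made-up examples:

	jdoe,secret,ADMIN,1
	asmith,changeme,USER,1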

The processor encrypts the passwords and inserts “ROLE_” before each role name. 


SpringBatch project/FileToDbBatchProcessor.java
@Configuration
public class FileToDbBatchProcessor {
	@Bean
	@StepScope
	public ItemProcessor<User, User> ftdImportFileStepProcessor() {
		return item -> {
			/*
			 * Transforming data: encrypt the password and prefix the role with "ROLE_"
			 */
			item.setUsername(item.getUsername());
			item.setPassword(
					PasswordEncoderFactories
					.createDelegatingPasswordEncoder()
					.encode(item.getPassword()));
			item.setRole("ROLE_"+item.getRole());
			item.setEnabled(item.getEnabled());	
			
			return item;
		};
	}	
}

The writer loads the transformed data to the tables, users and authorities.


SpringBatch project/FileToDbBatchWriter.java
@Configuration
@EnableBatchProcessing(dataSourceRef = "dataSource")
public class FileToDbBatchWriter {
		
	@Autowired
	DataSource dataSource;

	@Bean
	@StepScope
	public JdbcBatchItemWriter<User> ftdImportFileStepWriter() {
		
		String sql = "INSERT ALL "
			+ "INTO users(id, username, password, enabled) "
			+ "VALUES (users_seq.nextval, :username, :password, :enabled) "
			+ "INTO authorities(username, authority) "
			+ "VALUES (:username, :role) "
			+ "SELECT * FROM DUAL";
		
		return new JdbcBatchItemWriterBuilder<User>()
			.sql(sql)
			.dataSource(dataSource)
			.itemSqlParameterSourceProvider(
				new BeanPropertyItemSqlParameterSourceProvider<>())
			.build();
	}
}


Run the batch


Configure a CommandLineRunner bean in the batch application to launch the file import job.


SpringBatch project/SpringBatchApplication.java
@SpringBootApplication
public class SpringBatchApplication {

	public static void main(String[] args) {
		SpringApplication.run(SpringBatchApplication.class, args);
	}

	@Bean
	public CommandLineRunner runImportFileJob(
			JobLauncher jobLauncher, 
			Job importUserJob,
			@Value("${import.path}") String importPath,
			@Value("${file.users}") String fileUsers) {
		
	    return args -> {
	        JobParameters jobParameters = new JobParametersBuilder()
	        		.addString("filePath", importPath + fileUsers)
	        		.addLong("executedTime", System.currentTimeMillis())
	        		.toJobParameters();
	        jobLauncher.run(importUserJob, jobParameters);
	    };
	}
}

Click the Run button in Eclipse. 

After the batch finishes, you can view the inserted records in the tables, as shown in the following screenshot.




Recap


A job consists of steps. A step is typically made up of a reader, a processor, and a writer, or of a tasklet. A tasklet is suitable for a specific, separate task like cleaning up a table. Because a Spring Batch job has access to all Spring features, it can implement complicated business logic and easily adapt to daily operation, monitoring, and enhancement in an enterprise environment. Moreover, no extra infrastructure is required to host it. 



Reference


Spring Batch 


Monday, August 4, 2025

[Tips] Spring Boot – Access Data with Mybatis


When talking about data persistence in Spring, we can't overlook Mybatis, which is widely used in enterprise apps. It eases the trouble of writing and testing data access code. What do we need to do to integrate Mybatis into a Spring Boot app? Simply put, 

  • Tell Spring Boot where the mappers are (@MapperScan);
  • Declare interfaces in the Java mapper file (@Mapper);
  • Define SQL statements in the XML file corresponding to the interface, or put the SQL directly in the Java mapper file via annotations if you like (see the sketch after this list).
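
A minimal sketch of the annotation-based style, reusing the DepartmentMapper names that appear later in this post (the sample project itself uses the XML approach):

	@Mapper
	public interface DepartmentMapper {
		@Select("SELECT department_id, department_name, manager_id "
			+ "FROM departments ORDER BY department_id")
		List<DptDto> findAll();
	}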

We’ll walk through a few examples to see how Mybatis works in some scenarios. In the first example, we’ll use Mybatis to retrieve the list of all departments from the database.

The source code is available in the sample projects on GitHub.



Prerequisite


As we start this exercise, a few things need to be prepared beforehand: 
  • Spring Tool Suite is installed in the IDE; 
  • the tables, Departments and Employees, are created in the database; 
  • and the corresponding data classes are defined.

For details, refer to [Tips] Spring Boot – Access Data with JDBC.

When you create a new project for practice, choose Spring Starter Project.




Dependency


To enable Mybatis, we’ll need to include Mybatis dependency in the Spring Boot app’s pom.xml file.


SpringMVC project/pom.xml
	<dependency>
		<groupId>org.mybatis.spring.boot</groupId>
		<artifactId>mybatis-spring-boot-starter</artifactId>
		<version>3.0.3</version>
	</dependency> 


In case of using a default data source


If you prefer Spring Boot to configure a data source as the default one for the app, all you need to do is to write down the connection strings in the application property file, as shown in the example below. 

Note that the key names are prefixed with "spring.datasource".


SpringMVC project/application.properties
spring.datasource.driver-class-name=oracle.jdbc.OracleDriver
spring.datasource.url=jdbc:oracle:thin:@localhost:1521/xepdb1
spring.datasource.username=dbuser
spring.datasource.password=dbpassword

In the start class of the app, declare the package of the mappers by using @MapperScan.


SpringMVC project/SpringMvcApplication.java
@SpringBootApplication
@MapperScan("com.example._50_dao.mapper")
public class SpringMvcApplication {
	static final Logger log = LoggerFactory.getLogger(SpringMvcApplication.class);
	
	public static void main(String[] args) {
		SpringApplication.run(SpringMvcApplication.class, args);
	}
}

Make sure to place the Java mapper file under src/main/java and the XML mapper file under src/main/resources, respectively. They should share the same sub-directory structure as well.
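
Assuming the package declared in @MapperScan above (com.example._50_dao.mapper), the pair would sit in locations like these:

	src/main/java/com/example/_50_dao/mapper/DepartmentMapper.java
	src/main/resources/com/example/_50_dao/mapper/DepartmentMapper.xml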



SpringMVC project/DepartmentMapper.java
@Mapper
public interface DepartmentMapper {
	List<DptDto> findAll();
	…
}

SpringMVC project/DepartmentMapper.xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE mapper PUBLIC "-//mybatis.org//DTD Mapper 3.0//EN"
        "http://mybatis.org/dtd/mybatis-3-mapper.dtd">
<mapper namespace="com.example._50_dao.mapper.DepartmentMapper">

    <select id="findAll" resultType="com.example._60_dto.DptDto">
        SELECT department_id, department_name, manager_id 
        FROM departments
        ORDER BY department_id
    </select>
    …
</mapper>

In the DptService bean, autowire the mapper to fetch all the departments from the database.


SpringMVC project/DptService.java
@Service
public class DptService{
	@Autowired
	DepartmentMapper dptMapper;	
	
	@Transactional
	public List<DptDto> findAll() {
		return dptMapper.findAll();
	}
}


Run the app


Following the recipe step by step, we now have Mybatis ready for use. Standard output is handy for checking the result; here we show the departments on a web view.




How to handle joins?


For example, suppose we'd like to get a dataset containing all the columns of Employees and Departments; how can we get this done? 

Of course, we join the tables in the SQL statement; in other words, we let the database complete the join rather than writing Java code to combine the Employees and Departments datasets. That way, we only need to handle the joined dataset returned by the database, which is much easier.

We start with defining a data class named EmpDptDto, which has all the fields of the joined tables. 


SpringMVC project/EmpDptDto.java
public class EmpDptDto {
	private Integer employee_id;
	private String first_name;
	private String last_name;
	private Double salary;
	private String hire_date;
	private Integer manager_id;
	private Long department_id;
	private String department_name;
	/*
	 * Constructors, Getters, Setters etc.
 	 */
}

Declare a method in the Java mapper interface, which returns a list of EmpDptDto objects.


SpringMVC project/EmployeeMapper.java
@Mapper
public interface EmployeeMapper {
    List<EmpDptDto> empLeftJoinDpt();
}

In the XML mapper file, define a <select> element with its id set to "empLeftJoinDpt" and resultType set to the preceding data class.


SpringMVC project/EmployeeMapper.xml
  <select id="empLeftJoinDpt" resultType="com.example._60_dto.EmpDptDto">
      SELECT e.employee_id, e.first_name, e.last_name, e.salary, e.hire_date, e.manager_id, 
      d.department_id, d.department_name
      FROM employees e LEFT JOIN departments d
      ON e.department_id = d.department_id
  </select> 

That’s it. The result comes out like this if we print out that list:


[Employee-Department[employee_id='100', first_name='David', last_name='OConnell', 
salary='4800.000000', hire_date='2010/01/01', manager_id='null', department_id=10, 
department_name=Administration], 
…
Employee-Department[employee_id='202', first_name='Pat', last_name='Fay', 
salary='6000.000000', hire_date='2010/12/31', manager_id='200', department_id=20, 
department_name=Marketing]]


What if accessing the secondary data source?


This is also a common scenario. For example, we'd like to get the full list of employees from the secondary data source. How do we do this? The point is that Mybatis mappers are bound to a data source. 


Update the secondary data source’s configuration

In the configuration file of the secondary data source, 

  • Specify the mapper location (@MapperScan);
  • Define a SqlSessionBean.

SpringMVC project/SecondaryDataSourceConfig.java
@Configuration(proxyBeanMethods = false)
@MapperScan(
		  basePackages = "com.example._50_dao.sndmapper"
		)
public class SecondaryDataSourceConfig {
			
	/*
	 * Secondary data source
	 */
	@Qualifier("secondary")
	@Bean(defaultCandidate = false)
	public DataSource sndDataSource(
    		@Value("${secondary.datasource.driver-class-name}") String dsDriverClassName,	
    		@Value("${secondary.datasource.url}") String dsURL,	
    		@Value("${secondary.datasource.username}") String dsUsername,	
    		@Value("${secondary.datasource.password}") String dsPassword
    		) {
		  DriverManagerDataSource dataSource = new DriverManagerDataSource();
		  dataSource.setDriverClassName(dsDriverClassName);
		  dataSource.setUrl(dsURL);
		  dataSource.setUsername(dsUsername);
		  dataSource.setPassword(dsPassword);
        
		  return dataSource;
	}
	…
	/*
	 * SqlSessionFactoryBean for Mybatis
	 */	
	@Qualifier("secondary")
	@Bean(defaultCandidate = false)
	public SqlSessionFactory sndSqlSessionFactory(
			 @Qualifier("secondary") DataSource dataSource) throws Exception {
		  SqlSessionFactoryBean sessionFactory = new SqlSessionFactoryBean();
		  sessionFactory.setDataSource(dataSource);
		  return sessionFactory.getObject();
	}

	@Qualifier("secondary")
	@Bean(defaultCandidate = false)
	public SqlSessionTemplate sndSqlSessionTemplate(
    		@Qualifier("secondary") SqlSessionFactory sqlSessionFactory) {
         
		return new SqlSessionTemplate(sqlSessionFactory);
	}
} 


Add Java and XML mapper files

The files are located in the directories specified in the data source's configuration, as shown in the screenshot below. In this example, we copied the mapper files for the primary data source and gave them different names. The namespace in the XML mapper file was updated as well.


SpringMVC project/SndEmployeeMapper.xml
<mapper namespace="com.example._50_dao.sndmapper.SndEmployeeMapper">
  
SpringMVC project/EmployeeMapper.xml
<mapper namespace="com.example._50_dao.mapper.EmployeeMapper"">



Call the mapper’s method

In the EmpService bean, autowire a SndEmployeeMapper bean and call its findAll() method.


SpringMVC project/EmpService.java
@Service
public class EmpService{
	@Autowired
	SndEmployeeMapper empMapper;
	
	public List<EmpDto> findAll() {
		return empMapper.findAll();
	}
}


Check out the result

Launch the SpringMVC sample app from Eclipse. 

Click the Employees button.

The result is shown in the screenshot below.




[Tips] Spring Boot - Access Data with JDBC


Spring JDBC is intuitive: you issue a query and you get a dataset; when you extract the data from the ResultSet and save it to your POJO, you can transform it at that point. If the database is tuned to process SQL statements well, you can come up with graceful SQL that shifts more work to the database and, as a result, simplifies the operations in your programs.   

We’ll walk through the following examples to see how Spring Boot supports it.

The source code is available in the sample projects on GitHub.



Prerequisite


First, get the Spring Tool Suite installed in your Eclipse; you can find it in Eclipse Marketplace. If you use Tanzu's Spring Tools, you can skip this step because that IDE already comes with the Tool Suite.



After Spring Tool Suite is installed, you'll see Spring Boot options show up when you create a project; choose Spring Starter Project for practice.

Moreover, you’ll need to create the following tables in the database. In the examples, Oracle XE is the database being used.


SpringMVC project/schema.sql
CREATE TABLE Departments (
   Department_Id NUMBER(4)
 , Department_Name VARCHAR2(20)
 , Manager_Id NUMBER(6)
)
 NOLOGGING
 PARALLEL;

CREATE TABLE Employees (
   Employee_Id NUMBER(6)
 , First_Name VARCHAR2(20)
 , Last_Name VARCHAR2(20)
 , Salary NUMBER(8)
 , Hire_Date CHAR(10)
 , Manager_Id NUMBER(6)
 , Department_Id NUMBER(4)
)
 NOLOGGING
 PARALLEL;


At the application level, data classes are created accordingly; refer to Departments.java and Employees.java in the SpringMVC project. 



Dependencies


You'll need to add the JDBC and Oracle dependencies to the pom.xml file, as shown in the example below.


SpringMVC project/pom.xml
  	<dependency>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-jdbc</artifactId>
	</dependency>		
	<dependency>
		<groupId>com.oracle.database.jdbc</groupId>
		<artifactId>ojdbc11</artifactId>
		<scope>runtime</scope>
	</dependency>

However, when you create your Spring Starter project, if you check Oracle Driver and Spring Data JDBC options on the Project Dependencies page, you will get them in pom.xml automatically.


Project Dependencies Page


Configuration


This is a beautiful part of Spring Boot: as long as we write down the connection properties in the application property file, Spring Boot will auto-configure the data source. 

Note that the names of the keys should start with “spring.datasource”.


SpringMVC project/application.properties
spring.config.activate.on-profile=dev
spring.datasource.driver-class-name=oracle.jdbc.OracleDriver
spring.datasource.url=jdbc:oracle:thin:@localhost:1521/xepdb1
spring.datasource.username=user
spring.datasource.password=password


Data Classes


The data classes, associated with the tables, are defined as follows. They are POJOs, neither implementing any interface nor extending any super class.


SpringMVC project/DptDto.java
public class DptDto {
	private Long department_id;
	private String department_name;
	private Integer manager_id;
	
	/*
	 * Constructors, Getters, Setters etc.
	 */
	…
}

SpringMVC project/EmpDto.java
public class EmpDto {
	private Integer employee_id;
	private String first_name;
	private String last_name;
	private Double salary;
	private String hire_date;	
	private Integer manager_id;
	private Long department_id;
	/*
	 * Constructors, Getters, Setters etc.
	 */
	…
}


DAO


As Spring Boot takes care of the configuration, we can simply autowire a JdbcTemplate bean in our DAO bean. Below is the example of getting all the records from the Departments table.


SpringMVC project/DptDaoImpl.java
  @Repository
  public class DptDaoImpl implements DptDao {

      @Autowired
      JdbcTemplate jdbcTemplate;

      public List<DptDto> getAllDpts(){
          String sql = "SELECT department_id, department_name, manager_id "
          	+ "FROM departments "
          	+ "ORDER BY department_id";

          List<DptDto> dptlist = null;
          try {
              dptlist = jdbcTemplate.query(sql,
				(rs, rowNum) -> new DptDto(rs.getLong("department_id"), 
				rs.getString("department_name"), 
				rs.getInt("manager_id"))
              );
          } catch (Exception e) {
              System.out.println(e);
          }

          return dptlist;
      }
      …
  }


Test the app


So, we can run the DAO to retrieve data from the database now. To get a better visual effect, we display the list of departments on a web page.




Add another data source


It is not a rare case to handle multiple data sources in an enterprise application. How do we do it? First, add the connection strings to the application property file.


secondary.datasource.driver-class-name=oracle.jdbc.OracleDriver
secondary.datasource.url=jdbc:oracle:thin:@localhost:1521/xepdb1
secondary.datasource.username=user
secondary.datasource.password=password

Second, in the case of Java configuration, declare a data source bean in a configuration class, and define a JdbcTemplate bean associated with that data source. 

Don’t forget to add a qualifier to these beans because we need to distinguish them from the default ones. This is important.


SpringMVC project/SecondaryDataSourceConfig.java
@Configuration(proxyBeanMethods = false)
public class SecondaryDataSourceConfig {
			
	/*
	 * Secondary data source
	 */
	@Qualifier("secondary")
	@Bean(defaultCandidate = false)
	public DataSource sndDataSource(
    		@Value("${secondary.datasource.driver-class-name}") String dsDriverClassName,	
    		@Value("${secondary.datasource.url}") String dsURL,	
    		@Value("${secondary.datasource.username}") String dsUsername,	
    		@Value("${secondary.datasource.password}") String dsPassword
    		) {
		DriverManagerDataSource dataSource = new DriverManagerDataSource();
		dataSource.setDriverClassName(dsDriverClassName);
		dataSource.setUrl(dsURL);
		dataSource.setUsername(dsUsername);
		dataSource.setPassword(dsPassword);
        
		return dataSource;
	}

	/*
	 * JdbcTemplate for the secondary data source
	 */
	@Qualifier("secondary")
	@Bean(defaultCandidate = false)
	public JdbcTemplate sndJdbcTemplate(
			@Qualifier("secondary") DataSource dataSource) {
		return new JdbcTemplate(dataSource);
	}
}

This can also be done with less code, as shown in the following example.


SpringREST project/SecondaryDataSourceConfig.java
@Configuration(proxyBeanMethods = false)
public class SecondaryDataSourceConfig {
	
	@Profile("dev")
	@Qualifier("secondary")
	@Bean(defaultCandidate = false)
	@ConfigurationProperties("secondary.datasource")
	public DataSource sndDataSource() {
		DriverManagerDataSource dataSource = new DriverManagerDataSource();
		
		return dataSource;
	}
	
	@Profile("dev")
	@Qualifier("secondary")
	@Bean(defaultCandidate = false)
	public JdbcTemplate sndJdbcTemplate(
			@Qualifier("secondary") DataSource dataSource) {
		
		return new JdbcTemplate(dataSource);
	}
	…
}

Third, just autowire them in your DAO bean, with the qualifier specified.


SpringMVC project/EmpDaoImpl.java
@Repository
public class EmpDaoImpl implements EmpDao {
	
	@Qualifier("secondary")
	@Autowired
	JdbcTemplate jdbcTemplate;
	
	public List<EmpDto> findAll(){
		String sql = "SELECT employee_id, first_name, last_name, salary, "
        		+ "hire_date, manager_id, department_id "
        		+ "FROM employees "
        		+ "ORDER BY employee_id";
        
		return jdbcTemplate.query(sql, new RowMapper<EmpDto>() {
			@Override
			public EmpDto mapRow(ResultSet rs, int rowNum) throws SQLException {
            			EmpDto empDto = new EmpDto(
            				rs.getInt("employee_id"),
            				rs.getString("first_name"),
            				rs.getString("last_name"),
            				rs.getDouble("salary"),
            				rs.getString("hire_date"),
            				rs.getInt("manager_id"),
            				rs.getLong("department_id"));

            			return empDto;
			}
		});	
	};
}

The list of employees, retrieved from the secondary database, is shown in the following screenshot.




Take over the configuration


If, for some reason, we want to explicitly configure the primary data source, we can do so by disabling the auto-configuration with this annotation.

@SpringBootApplication(exclude = { DataSourceAutoConfiguration.class })

SpringREST project/SpringRestApplication.java
@SpringBootApplication(exclude = { DataSourceAutoConfiguration.class })
public class SpringRestApplication {
	…
	public static void main(String[] args) {
		SpringApplication.run(SpringRestApplication.class, args);
	}
}

Then, create a configuration class in which to define the data-source-related beans, in the same way as we did for the secondary data source in the "Add another data source" section. This time we make them primary. 


SpringREST project/PrimaryDataSourceConfig.java
@Configuration(proxyBeanMethods = false)
public class PrimaryDataSourceConfig {
			
	/*
	 * Primary data source
	 */
	@Primary
	@Bean
	public DataSource dataSource(
    		@Value("${spring.datasource.driver-class-name}") String dsDriverClassName,
    		@Value("${spring.datasource.url}") String dsURL,
    		@Value("${spring.datasource.username}") String dsUsername,
    		@Value("${spring.datasource.password}") String dsPassword
    		) {
		DriverManagerDataSource dataSource = new DriverManagerDataSource();
		dataSource.setDriverClassName(dsDriverClassName);
		dataSource.setUrl(dsURL);
		dataSource.setUsername(dsUsername);
		dataSource.setPassword(dsPassword);
        
		return dataSource;
	}
	
	/*
	 * JdbcTemplate for primary data source
	 */
	@Primary
	@Bean
	public JdbcTemplate jdbcTemplate(
			DataSource dataSource) {
		return new JdbcTemplate(dataSource);
	}
	…
}
