My code is very similar to the one below. Despite configuring the transaction manager, every item except the invalid one is inserted into the db. This makes no sense: with @Transactional it should be all inserts or none.
List<Book> books = new ArrayList<>();
for (int count = 0; count < size; count++) {
    if (count == 500) {
        // Create invalid data for item 500 to test rollback:
        // the name column allows max 255 chars, this name is 300 chars
        books.add(new Book(NameGenerator.randomName(300), new BigDecimal("1.99")));
        continue;
    }
    books.add(new Book(NameGenerator.randomName(20), new BigDecimal("1.99")));
}
@Transactional
public int[][] batchInsert(List<Book> books, int batchSize) {
    int[][] updateCounts = jdbcTemplate.batchUpdate(
            "insert into books (name, price) values (?, ?)",
            books,
            batchSize,
            new ParameterizedPreparedStatementSetter<Book>() {
                @Override
                public void setValues(PreparedStatement ps, Book argument) throws SQLException {
                    ps.setString(1, argument.getName());
                    ps.setBigDecimal(2, argument.getPrice());
                }
            });
    return updateCounts;
}
I faced this same difficult situation with a batch update. The solution that worked for me was to use a plain PreparedStatement rather than JdbcTemplate (although under the hood JdbcTemplate uses a prepared statement too). The scenario was to insert 5000 records into the db in a distributed application, with each batch (JSON payload) carrying 1000 records.
On the connection object you need to turn off auto-commit, and then commit the transaction only if all the inserts succeed. I have tried and tested this scenario in Spring Boot.
Using a prepared statement directly gives you control over the transaction, and in my tests it performed far better than going through JdbcTemplate.
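Below is a minimal sketch of that approach, assuming a plain javax.sql.DataSource and the same books (name, price) table as in the question; the Book class and its getters are carried over from the question's code.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;
import javax.sql.DataSource;

public int batchInsert(DataSource dataSource, List<Book> books, int batchSize) throws SQLException {
    String sql = "insert into books (name, price) values (?, ?)";
    try (Connection conn = dataSource.getConnection()) {
        conn.setAutoCommit(false); // take control of the transaction ourselves
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            int inserted = 0;
            for (int i = 0; i < books.size(); i++) {
                ps.setString(1, books.get(i).getName());
                ps.setBigDecimal(2, books.get(i).getPrice());
                ps.addBatch();
                // send a full batch to the db; nothing is visible until commit
                if ((i + 1) % batchSize == 0 || i == books.size() - 1) {
                    inserted += ps.executeBatch().length;
                }
            }
            conn.commit(); // every batch succeeded: make all rows durable
            return inserted;
        } catch (SQLException e) {
            conn.rollback(); // any failure: no rows from this call are kept
            throw e;
        }
    }
}

If one chunk fails, the rollback discards every previously executed batch on that connection, which gives the all-or-nothing behavior the question was after.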
This approach is also explained in the Vertica docs: https://www.vertica.com/docs/9.2.x/HTML/Content/Authoring/ConnectingToVertica/ClientJDBC/BatchInsertsUsingJDBCPreparedStatements.htm