Hibernate hbm2ddl won’t create schema before creating tables


Situation

I have a local in-memory H2 database for integration tests and an Oracle DB for production. I do not control the Oracle DB model. The in-memory H2 database is created automatically by adding

<prop key="hibernate.hbm2ddl.auto">update</prop>

to the Hibernate properties in AnnotationSessionFactoryBean. The definition of the entity stored in the DB points to a schema:

@Entity
@Table(name = "business_operations", schema = "sowa")
public class BusinessOperation {
...
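
For context, the test setup boils down to configuration along these lines (a minimal sketch expressed directly against the Hibernate 3.x API instead of the Spring XML; the H2 URL and dialect here are assumptions for illustration, not the project's actual values):

import org.hibernate.SessionFactory;
import org.hibernate.cfg.AnnotationConfiguration;

public class SessionFactoryDemo {
    public static SessionFactory build() {
        // roughly what AnnotationSessionFactoryBean wires up for the tests
        AnnotationConfiguration cfg = new AnnotationConfiguration();
        cfg.addAnnotatedClass(BusinessOperation.class);
        cfg.setProperty("hibernate.dialect", "org.hibernate.dialect.H2Dialect");
        cfg.setProperty("hibernate.connection.url", "jdbc:h2:mem:testdb");
        // hbm2ddl creates/updates the tables on startup
        cfg.setProperty("hibernate.hbm2ddl.auto", "update");
        return cfg.buildSessionFactory();
    }
}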

The problem

When creating the H2 database, Hibernate does not create the schema before creating the tables. As a result, it reports errors when trying to create tables in the non-existent schema, and every subsequent query fails (queries are run against sowa.business_operations).

2011-01-18 15:13:30,884 INFO [org.hibernate.tool.hbm2ddl.SchemaUpdate] - Running hbm2ddl schema update
2011-01-18 15:13:30,885 INFO [org.hibernate.tool.hbm2ddl.SchemaUpdate] - fetching database metadata
2011-01-18 15:13:30,915 INFO [org.hibernate.tool.hbm2ddl.SchemaUpdate] - updating schema
2011-01-18 15:13:30,927 INFO [org.hibernate.tool.hbm2ddl.DatabaseMetadata] - table not found: business_operations
2011-01-18 15:13:30,941 ERROR [org.hibernate.tool.hbm2ddl.SchemaUpdate] - Unsuccessful: create table sowa.business_operations 
2011-01-18 15:13:30,942 ERROR [org.hibernate.tool.hbm2ddl.SchemaUpdate] - 
Schema "SOWA" not found;

Turns out this bug has been reported and open since 2006: link.

The solution

The solution is to create the schema before Hibernate's hbm2ddl kicks in. That would be easy with H2 if we could tell it to initialize itself like this:

database.url=jdbc:h2:mem:;INIT=RUNSCRIPT FROM 'src/main/resources/scripts/create.sql';

All seems nice, except that H2's RUNSCRIPT FROM command does not resolve relative paths the way you might expect. Fortunately, INIT accepts arbitrary SQL commands, not just a pointer to a script, so this little change solves the problem:

database.url=jdbc:h2:mem:;INIT=create schema IF NOT EXISTS sowa
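
A quick way to verify the trick outside of Hibernate (a minimal sketch; the database name, credentials, and table columns are made up for the demo):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class H2InitDemo {
    public static void main(String[] args) throws Exception {
        // the schema exists as soon as the connection opens,
        // so DDL against sowa.* succeeds
        String url = "jdbc:h2:mem:testdb;INIT=create schema IF NOT EXISTS sowa";
        try (Connection conn = DriverManager.getConnection(url, "sa", "");
             Statement st = conn.createStatement()) {
            st.execute("create table sowa.business_operations (id bigint primary key)");
            System.out.println("sowa.business_operations created");
        }
    }
}

If you ever need more than one statement in INIT, H2 lets you chain them with an escaped semicolon (\;).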

Yeah, I know it's obvious and stupid-simple, but looking at all the questions on all the mailing lists on Google, I may have just saved somebody a little bit of time.

You May Also Like

Recently at storm-users

I've been reading through the storm-users Google Group recently. This digest was heavily inspired by Adam Kawa's post "Football zero, Apache Pig hero". Since I've encountered a lot of insightful and very interesting information, I've decided to describe some of it in this post.

  • nimbus will work in HA mode - There's a pull request open for it already... but some recent work (distributing topology files via BitTorrent) will greatly simplify the implementation. Once the BitTorrent work is done we'll look at reworking the HA pull request. (storm's pull request)

  • pig on storm - Pig on Trident would be a cool and welcome project. Join and groupBy have very clear semantics there, as those concepts exist directly in Trident. The extensions needed to Pig are the concept of incremental, persistent state across batches (mirroring those concepts in Trident). You can read a complete proposal.

  • implementing topologies in pure Python with petrel looks like this:

import storm  # the storm module shipped with petrel

class Bolt(storm.BasicBolt):
    def initialize(self, conf, context):
        '''Executed only once, when the bolt starts up.'''
        storm.log('initializing bolt')

    def process(self, tup):
        '''Executed every time a new tuple arrives.'''
        msg = tup.values[0]
        storm.log('Got tuple %s' % msg)

if __name__ == "__main__":
    Bolt().run()

  • Fliptop is happy with storm - see their presentation here

  • topology metrics in 0.9.0: The new metrics feature allows you to collect arbitrary custom metrics over fixed windows. These metrics are exported to a metrics stream that you can consume by implementing IMetricsConsumer and configuring it with Config.java#L473. Use TopologyContext#registerMetric to register new metrics (see the sketch after this list).

  • storm vs flume - some users' point of view: I use Storm and Flume and find that they are better at different things - it really depends on your use case as to which one is better suited. First and foremost, they were originally designed to do different things: Flume is a reliable service for collecting, aggregating, and moving large amounts of data from source to destination (e.g. log data from many web servers to HDFS). Storm is more for real-time computation (e.g. streaming analytics) where you analyse data in flight and don't necessarily land it anywhere. Having said that, Storm is also fault-tolerant and can write to external data stores (e.g. HBase), and you can do real-time computation in Flume (using interceptors).
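
As a minimal sketch of the registration side (assuming the Storm 0.9.x backtype.storm API; the bolt and metric names are made up for illustration):

import java.util.Map;
import backtype.storm.metric.api.CountMetric;
import backtype.storm.task.OutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseRichBolt;
import backtype.storm.tuple.Tuple;

public class CountingBolt extends BaseRichBolt {
    private transient CountMetric tupleCount;
    private OutputCollector collector;

    @Override
    public void prepare(Map conf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
        // publish the count every 60 seconds to any configured IMetricsConsumer
        tupleCount = context.registerMetric("tuple_count", new CountMetric(), 60);
    }

    @Override
    public void execute(Tuple tuple) {
        tupleCount.incr();
        collector.ack(tuple);
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // no output stream for this demo bolt
    }
}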

That's all for this day - however, I'll keep on reading through storm-users, so watch this space for more info on storm development.
