I have a Spring Boot application using a Postgres database. When the application is deployed, it needs to run a transactional operation that uploads a ZIP file used to populate the database. The application checks for duplicate rows before inserting them (because users can upload duplicate data, which should be ignored).

The problem I'm having is that the first time a file is uploaded, the indexes, though created, are not being used when checking for the existence of a row. My theory is that this happens because the query planner decides not to use the index based on the original statistics, which show the tables as empty. If I upload a small ZIP file first, the problem goes away because the tables then have data.

I have two questions. First, is my theory correct, or is there another reason for this behaviour? Also, if it is correct, is there a way to force Postgres to update the query plan it uses at a predefined interval within the same transaction, and can that be done using JPA? Any ideas appreciated.
Just in case anyone else runs into this issue, I'll post the solution I found. It appears the theory was correct: queries will not use the indexes until statistics have been collected. One way to force this is to call ANALYZE after a number of rows have been written to the database. You can do this with a native query:

entityManager.createNativeQuery("ANALYZE " + tbl).executeUpdate();
You can wrap the call in a try/catch and ignore any exceptions that might occur if you change the database engine. I couldn't find a way of doing this in a database-independent way, but the approach works fine and the initial upload performs as expected.
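Since ANALYZE takes an identifier rather than a bind parameter, the table name has to be concatenated into the SQL string, so it is worth validating it first. Below is a minimal sketch of that idea; the `AnalyzeHelper` class, the identifier check, and the commented-out EntityManager usage are illustrative assumptions, not part of the original answer:

```java
import java.util.regex.Pattern;

// Sketch of the workaround above: build a validated ANALYZE statement
// that can be passed to a native JPA query inside the upload transaction.
public class AnalyzeHelper {

    // Allow only plain SQL identifiers so the concatenated table name
    // cannot inject arbitrary SQL.
    private static final Pattern IDENTIFIER =
            Pattern.compile("[A-Za-z_][A-Za-z0-9_]*");

    // Returns the ANALYZE statement for the given table, or throws if
    // the name is not a safe identifier.
    public static String analyzeSql(String table) {
        if (!IDENTIFIER.matcher(table).matches()) {
            throw new IllegalArgumentException("Unsafe table name: " + table);
        }
        return "ANALYZE " + table;
    }

    // Hypothetical call site, after a batch of inserts (requires an
    // injected EntityManager, so it is shown here as a comment):
    //
    // try {
    //     entityManager
    //         .createNativeQuery(AnalyzeHelper.analyzeSql("my_table"))
    //         .executeUpdate();
    // } catch (RuntimeException e) {
    //     // ANALYZE is engine-specific; ignore failures on other databases.
    // }
}
```

The try/catch mirrors the answer's suggestion of swallowing exceptions so the upload still succeeds on database engines that do not support ANALYZE.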