I have a type of record with three fields:
- name VARCHAR(128)
- description VARCHAR(64)
- field VARCHAR(32)
The combination of (name, description) needs to be unique, and name needs to be indexed for fast search/retrieval. Updates are infrequent but large; an update might add 1 million records at a time. There are plenty of reads throughout the day.
I have around 200 million of these records, possibly growing to 300 million in the future. When I tried loading them into MySQL, the database grew huge; I'm looking at 200 GB+ of indices and so on. Is there a more space-efficient way to structure this data? Any database technology is fine with me, as long as it doesn't need lots of RAM and uses less disk.
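For reference, the straightforward schema I had in mind is roughly this (illustrative names, not my exact DDL):

CREATE TABLE plain_table (
    name        VARCHAR(128),
    description VARCHAR(64),
    field       VARCHAR(32),
    UNIQUE KEY uq_name_description (name, description)  -- secondary index duplicating (name, description)
) ENGINE = InnoDB;
-- name is the leading column of the unique index, so it also covers name lookups.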
200 million * (128 + 64 + 32) bytes is roughly 45 GB of raw data. I don't know how MySQL stores data, but I'm guessing it creates an index for the unique key (128 + 64), so we're talking about ~45 GB of data plus an index of roughly 40 GB. There is other overhead on top of that, possibly due to fragmentation as suggested below.
Thanks in advance!
Both MySQL and SQLite store text values in dynamically-sized records; table and index entries are not padded to the declared column length.
You can avoid the space needed for the unique-constraint index by making the name/description columns the primary key and using a clustered index (this requires InnoDB in MySQL, or WITHOUT ROWID in SQLite):
CREATE TABLE mysqltable (
    name        VARCHAR(128),
    description VARCHAR(64),
    field       VARCHAR(32),
    PRIMARY KEY (name, description)
) ENGINE = InnoDB;

CREATE TABLE sqlitetable (
    name        VARCHAR(128),  -- SQLite ignores the length limits
    description VARCHAR(64),
    field       VARCHAR(32),
    PRIMARY KEY (name, description)
) WITHOUT ROWID;
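Because name is the leading column of the clustered primary key, lookups by name are served by the primary key itself, so no extra index on name is needed; a minimal example:

SELECT name, description, field
FROM mysqltable
WHERE name = 'example name';  -- uses the clustered PRIMARY KEY (name, description)

Since the rows are clustered on (name, description), it also helps to sort each large batch by that key before inserting it, which reduces page splits and fragmentation when adding a million rows at a time.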