max_connections / shared_buffers / effective_cache_size

Hello, I'm a Sun Solaris sys admin for a start-up company. I've got the UNIX background, but now I'm having to learn PostgreSQL to support it on our servers :)

Server background:
Solaris 10 x86
PostgreSQL 8.0.3
Dell PowerEdge 2650 w/4GB RAM

This is running JBoss/Apache as well (I KNOW the bad juju of running it all on one box, but it's all we have currently for this project). I'm dedicating 1GB for PostgreSQL alone.

So far I LOVE it compared to MySQL; it's solid.

The only things I'm kind of confused about (and I've been searching through a lot of good perf docs, but they're not too clear to me) are the following:

1.) shared_buffers - I see a lot of references to making this the size of available RAM (for the DB). However, I also read to make it the size of the pgdata directory. I notice when I load Postgres, each daemon is using the amount of shared memory (shared_buffers). Our current dataset (pgdata) is 85MB in size. So I'm curious: should this size reflect the pgdata or the 'actual' memory given? I currently have this at 128MB.
You generally want shared_buffers to be no more than 10% of available RAM. Postgres expects the OS to do its own caching. 128M/4G = 3% seems reasonable to me. I would certainly never set it to 100% of RAM.
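(For concreteness, a hedged sketch of the corresponding postgresql.conf entry; the values are illustrations of the advice above, not settings from the thread. Note that 8.0.x counts shared_buffers in 8KB pages - the '128MB'-style unit syntax only arrived in 8.2:)

    # postgresql.conf, PostgreSQL 8.0.x -- illustrative values
    # shared_buffers is a count of 8KB pages in this release
    shared_buffers = 16384    # 16384 * 8KB = 128MB, ~3% of 4GB RAM

Changing shared_buffers only takes effect after a server restart.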
2.) effective_cache_size - from what I read, this is the 'total' memory PostgreSQL is allowed to use, correct? So, if I am willing to allow 1GB of memory, should I make this 1GB?
This is the effective amount of caching between the actual Postgres buffers and the OS buffers. If you are dedicating this machine to Postgres, I would set it to something like 3.5G. If it is a mixed machine, then you have to think about it.

This does not change how Postgres uses RAM; it changes how Postgres estimates whether an index scan will be cheaper than a sequential scan, based on the likelihood that the data you want will already be cached in RAM.

If your dataset is only 85MB, and you don't think it will grow, you really don't have to worry about this much. You have a very small database.
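(To put numbers on that advice - a sketch with assumed values, since effective_cache_size is also counted in 8KB pages in 8.0.x:)

    # effective_cache_size is a planner hint, not an allocation;
    # in 8.0.x it is a count of 8KB pages.
    effective_cache_size = 458752    # 458752 * 8KB = 3.5GB (dedicated box)
    #effective_cache_size = 131072   # 131072 * 8KB = 1GB (mixed box)

Because it is only a planner estimate, getting it roughly right is enough; it never causes allocation or swapping by itself.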
3.) max_connections - I've been trying to figure out 'how' to determine this #. I've read this is buffer_size + 500k per connection, i.e. 128MB (buffer) + 500KB = 128.5MB per connection?
Max connections is just how many concurrent connections you want to allow. If you can get away with lower, do so. Mostly this is to prevent connections * work_mem from getting bigger than your real working memory and causing you to swap.
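(A hedged worked example of that bound, with assumed values: each sort or hash step in a query can use up to work_mem, so the rough worst case is max_connections * work_mem:)

    # illustrative sizing sketch, 8.0.x units (work_mem in KB)
    max_connections = 100    # 100 connections *
    work_mem = 4096          # 4MB each = ~400MB worst case for sorts,
                             # comfortably inside 4GB of physical RAM

A single complex query can run several sorts at once, so treat the product as a floor on memory pressure, not a ceiling.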
I was curious about 'sort_mem' - I can't find reference to it in the 8.0.3 documentation; has it been removed?
sort_mem changed to work_mem in 8.0; same thing with vacuum_mem -> maintenance_work_mem.
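(The renames map one-to-one in postgresql.conf - same semantics, values in KB; a sketch using the poster's numbers below:)

    # PostgreSQL 7.4 name           # PostgreSQL 8.0 name
    #sort_mem = 4096         -->    work_mem = 4096                # 4MB
    #vacuum_mem = 65536      -->    maintenance_work_mem = 65536   # 64MB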
work_mem and max_stack_depth are set to 4096; maintenance_work_mem is set to 64MB.
Depends how much space you want to give per connection. 4M is pretty small for a machine with 4G of RAM, but if your DB is only 85M it might be plenty.

work_mem is how much memory a sort/hash/etc. will use before it spills to disk. So look at your queries. If you tend to sort most of your 85M DB in a single query, you might want to make it a little bit more. But if all of your queries are very selective, 4M could be plenty.

I would make maintenance_work_mem more like 512M. It is only used for CREATE INDEX, VACUUM, etc. - things that are not generally done by more than one process at a time. And it's nice for them to have plenty of room to run fast.
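(A hedged sketch of those suggestions in 8.0.x conf units - both settings are in KB, and the values are illustrations of the advice above:)

    work_mem = 4096                  # 4MB per sort/hash before spilling to disk
    maintenance_work_mem = 524288    # 512MB for CREATE INDEX, VACUUM, etc.

Both can also be raised for a single session when running a one-off maintenance job, e.g. SET maintenance_work_mem = 524288; immediately before a CREATE INDEX.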
Thanks for any help on this. I'm sure bombardment of newbies gets old :)

-William
Good luck,
John