thoughtbot Giant Robots Episode #77
Harold Giménez, lead of the Heroku Postgres team
topics: Heroku Postgres 2.0, rollback (WAL replay), stats
~15-20 people at Heroku run Postgres: the Heroku Postgres team
before thoughtbot: data analytics work in R, across many databases
big announcement: Heroku Postgres 2.0
rollback: get your data back as it was at a given point in time
implemented by creating a database fork
a snapshot is taken every night
WAL segments are captured every 60 seconds
from the snapshot, replay the log up to the requested point
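The snapshot-plus-WAL scheme above is standard Postgres point-in-time recovery. A minimal sketch of the replay side, using the recovery.conf syntax of that era (the archive path and target timestamp are made up; Heroku's internal tooling is not public):

```
# recovery.conf, placed on a restore of the nightly base snapshot
restore_command = 'cp /wal_archive/%f %p'        # fetch archived WAL segments
recovery_target_time = '2013-05-01 14:00:00'     # stop replaying at this point
```

Postgres restores the base snapshot, then replays each archived segment until it reaches the target time.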
pgbackups are logical backups
impractical once the database gets big
pgbackups remain useful for exporting data
high availability: a standby follows the primary
failover is driven by a state machine, running on dynos
after a 2 minute timeout, the primary is killed completely
the standby is promoted to primary
how much availability? downtime keeps decreasing
not every moment is spent writing code
also: how do you improve the team? communicate, have conversations
joined Heroku a few months after it was acquired; quite happy
the important thing is to keep improving and to share
daily standup happens over email
weekly planning meeting
Thursday is a no-distraction day
Friday beer club: tasting different yeasts and hops
personal things get shared too: guitar playing, recent trips
staying up to date with Postgres: follow the Postgres community
patches, funded projects
lots of smaller events, like the PGDay conferences
some contributors come from other databases, even the Linux kernel world
one of the biggest open source projects
Postgres is a single-cluster database, and that comes with costs
connection management: Sidekiq holds on to connections, multi-process servers too
a dyno can open many connections
once the limit is hit, Postgres will not be able to establish new connections
scaling dynos up scales the connection count up
ActiveRecord can leak connections when a dyno doesn't restart cleanly
check pg_stat_activity to see them
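A quick way to see where the connections are going, a sketch against the real pg_stat_activity view (the state column assumes Postgres 9.2+):

```sql
-- One row per backend; group to find who is holding connections
SELECT datname, state, count(*)
FROM pg_stat_activity
GROUP BY datname, state
ORDER BY count(*) DESC;
```

Leaked connections typically show up here as a pile of idle backends that never go away.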
a metrics pipeline and the logs tell you what's happening
bad lock management shows up in the metrics
heroku postgres writes its own log lines
if the database went down: open a support ticket, expect them to bring it back
automation cannot fix every problem
prior to the Postgres 2.0 release, a page only came once or twice
batch processes: investigation, then open up a ticket
profound operational work keeps your db healthy
something many have never used by hand: VACUUM
writing a new row touches the data table and the indexes
for the index side there is REINDEX, and newer strategies
VACUUM cleans up dead tuples
the vacuum daemon (autovacuum) can be made more aggressive
"we already are" (Heroku tunes it); most users never think about it
until a big table bloats
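Autovacuum aggressiveness can be raised per table with storage parameters; a sketch with an assumed table name and illustrative thresholds (not Heroku's actual settings):

```sql
-- Vacuum this table once ~2% of its rows are dead,
-- instead of the 20% default scale factor
ALTER TABLE events SET (
  autovacuum_vacuum_scale_factor = 0.02,
  autovacuum_vacuum_threshold    = 1000
);
```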
first strategy: model it as a state machine
but without a single state column
instead, keep recording more information as new rows
that recording of information is what makes rollback possible, in terms of the data
for giant tables: partitioning
a parent table with child tables
triggers and constraints route the data
transparent to the application
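The parent/child/trigger setup described is the classic inheritance-based partitioning of that era; a minimal sketch with made-up table and column names:

```sql
-- Pre-PG10 partitioning by inheritance
CREATE TABLE events (id bigserial, created_at timestamptz NOT NULL, payload text);

CREATE TABLE events_2013_05 (
  CHECK (created_at >= '2013-05-01' AND created_at < '2013-06-01')
) INHERITS (events);

-- Trigger routes inserts on the parent into the right child
CREATE OR REPLACE FUNCTION events_insert_router() RETURNS trigger AS $$
BEGIN
  IF NEW.created_at >= '2013-05-01' AND NEW.created_at < '2013-06-01' THEN
    INSERT INTO events_2013_05 VALUES (NEW.*);
  ELSE
    RAISE EXCEPTION 'no partition for %', NEW.created_at;
  END IF;
  RETURN NULL;  -- the row already went into the child table
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER events_insert BEFORE INSERT ON events
FOR EACH ROW EXECUTE PROCEDURE events_insert_router();
```

The application keeps inserting into `events`; the CHECK constraints also let the planner skip irrelevant children when querying.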
with millions of rows, Postgres is effectively doing a binary search
through the btree index structure
the btree structure can become unbalanced (bloated) over time
the solution: rebuild the index, concurrently
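Plain REINDEX takes locks that block writes; the common online alternative is to build a replacement index concurrently and swap it in. A sketch with illustrative table/index names (DROP INDEX CONCURRENTLY needs Postgres 9.2+):

```sql
-- Build a fresh index without blocking writes, then swap
CREATE INDEX CONCURRENTLY users_email_idx_new ON users (email);
DROP INDEX CONCURRENTLY users_email_idx;
ALTER INDEX users_email_idx_new RENAME TO users_email_idx;
```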
various caching strategies; primary key lookups
caching happens at various levels:
shared buffers inside Postgres (heavily used)
the operating system page cache
slow queries are the ones missing the cache
recommended: watch the cache hit ratio
below a 99% hit ratio, investigate
access patterns decide what gets cached
at 80% you might need a bigger plan
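The hit ratio can be read from Postgres's own counters; a sketch using the real pg_statio_user_tables view:

```sql
-- Fraction of heap block reads served from shared buffers
SELECT sum(heap_blks_hit)::float
       / nullif(sum(heap_blks_hit) + sum(heap_blks_read), 0) AS cache_hit_ratio
FROM pg_statio_user_tables;
```

Note that blocks served by the operating system page cache still count as "reads" here, so the true ratio of disk hits is somewhat better than this number suggests.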
know what's going on inside Postgres
its cache is beautiful: don't reach for memcached by default
rely on Postgres; memcached only occasionally
heroku postgres team: production plans run on dedicated machines
two kinds of database plans:
bigger, dedicated instances
a shared database cluster, many databases sharing resources on the same box
production-level plans, e.g. the Ika and Mecha plans
interesting: they never say "you should upgrade"
no emails from a sales team
at a 70% cache hit ratio it's your money, you should check
most people should look at Postgres 9.2: pg_stat_statements
it keeps track of every statement and how many times it executed
one view, including average execution time
add EXPLAIN and you know how to manage the problem
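pg_stat_statements ships with 9.2 as an extension (it has to be in shared_preload_libraries); a sketch of the one-view query, using 9.2-era column names:

```sql
CREATE EXTENSION pg_stat_statements;

-- Ten most expensive statements by total time, with average per call
SELECT query,
       calls,
       total_time,
       total_time / calls AS avg_time_ms
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 10;
```

Feed the worst offenders into EXPLAIN (or EXPLAIN ANALYZE) to see why they are slow.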