Quick survey: How do you upgrade your sites?

I'm interested in how folks handle upgrades to their code and databases
(not ColdBox specifically). I'd appreciate any feedback on how you
deal with these scenarios, and any others that are part of your dev
lives.

The basic issue is how you move code from development to other places,
how you keep the database(s) in sync with the code, and how you manage
the transition from the old code and schema to the new.

- The site may have visitors at any time, not under your control,
maybe lots of them.

- Files and db may be at a hosting company or someplace else that's
not local, so getting code onto the box requires FTP, RDS, DAV, or
something similar.

- For the same reason, modifying the db requires some kind of remote sql access.

- Maybe the trickiest part: unless you shut the site down, the moment
new code gets there, it's running against the un-upgraded db. If you
upgrade the db first, the old code runs against it.

So that's the question: how do you deal with all this? Any thoughts
would be much appreciated. I'm in this situation too, and I've been
thinking about what tools I'd like to have available, so any ideas
about helpful infrastructure would be great too.

Thanks all,
Dave

Hi Dave,

Great question. I myself am just getting into using Git and continuous integration servers for testing, and I would be interested in hearing about other CB developers' approaches to deployment to production. On a similar topic, I recently came across an interesting article about how Twitter accomplishes their production deployments. Not sure if this is an approach that could be applied here, but it's worth a read.

Nolan

For a lot of small changes, I will hotfix them to production during the day, clear trusted cache, and reinit the framework. I've been lucky enough to work at companies who were fine with just taking the site offline for a few hours one night to do the code migration and testing for larger releases. That gives QA time to do some regression testing on the site before turning it back on.

If I absolutely had to do a major update without taking the site down, I would remove the servers from the load balancer one-at-a-time, update them, and put them back in. That doesn't quite take care of corresponding SQL changes, but the majority of those tend to be things like new columns or new tables. In those instances, you can safely migrate one prior to the other.

You can also script out database changes ahead of time and practice running them on your staging environment to minimize the amount of time migration actually takes. Trusted cache can also come in handy for CF since it allows you to move all of your CF code and it won't kick in until you clear the trusted cache. That trick doesn't work for JS or CSS though.
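
For example, an additive change like a new nullable column is safe to
run before the new code ships. A quick sketch in modern CFML (the
table, column, and DSN names here are made up):

    // A pre-scripted, backward-compatible change: old code ignores the
    // new column, new code can start using it once deployed. Time it on
    // staging first, then run it on production ahead of the code push.
    queryExecute(
        "ALTER TABLE users ADD last_login_at DATETIME NULL",
        {},
        { datasource: "myDSN" }
    );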

As far as tools that I use:
SQL Server: Red Gate SQL Compare and Red Gate Data Compare.
Web Server Code (CF, JS, CSS, images): Araxis Merge folder/file compare

Both of those tools are simply indispensable as far as I'm concerned.

Thanks!

~Brad

A combination of Version Control, in our case SVN, and deployment
scripts with ANT are the biggest players for us at the moment.

We tend to build things in milestones, which are basically snapshots
of the codebase at the time we want to release. SVN calls them tags.

We then run an ANT script, pick which tag we want and the environment,
and ANT does the rest. Basically it consists of exporting, archiving
and deploying the new code to the chosen environment.

We're not doing any database work at the moment but it's something I'm
keeping in mind for our current project. ANT does have tasks that let
you connect to a database and run SQL scripts against it, though.

Couldn't live without them now :)

Cheers,
James

Good answers guys, keep 'em coming...

The Twitter approach is interesting, though since they're apparently
doing it inside a single datacenter (!?!), a number of the assumptions
they make about the network environment specifically aren't true in
the world(s) I live in. They're also trying to solve scaling problems
I've never had (so far?). Related, here's an interesting preso about
how Facebook deals with their scale:
   http://www.infoq.com/presentations/Facebook-Software-Stack

Note though, that as I said, this isn't the kind of problem I'm trying
to think about with this bit.

One specific question: Generally, is it ok to reinit the framework to
deploy an upgrade? I know on a large site, the first request after a
reinit can be quite slow. It seems unavoidable though, since handlers
and a bunch of other code objects are cached.

Dave

Dave, putting this in place has made our reinits much smoother:
http://groups.google.com/group/coldbox/browse_thread/thread/14b016e815be007a/50e05770039c2d30#50e05770039c2d30

Thanks Ken, interesting. I was wondering if the framework prevented
multiple requests from running reinit at once. I gather from what you
did that, without your code, it doesn't, right? I would have thought
that was a requirement. Sounds like any approach to the overall
updates scenario needs to include something like this, unless Luis
thinks it should be built into how fwreinit works.

Dave

So I gather you do reinit after code changes, yes?

How do you manage db updates?

Dave

I've had pretty good luck using this:

http://www.barneyb.com/barneyblog/projects/schema-tool/

And tying the schema sync into the framework re-init.
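
In ColdBox terms you can hang that off an interceptor, something like
this (just a sketch; SchemaTool and the "dsn" setting are stand-ins
for whatever runner and config you actually use):

    // Fires whenever the framework loads or reloads, so the schema
    // catches up with the code automatically on every reinit.
    component extends="coldbox.system.Interceptor" {
        function afterConfigurationLoad(event, interceptData) {
            new model.SchemaTool().run(dsn = getSetting("dsn"));
        }
    }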

:Den

Thanks Den, looks interesting, will take a look at the code when I get
home tonight, can't now. Does it try to have your upgrades be
compatible with multiple db types (SQL Server, Oracle, MySQL, and
Postgres are the ones I care about so far)? Building something to tie
into the framework execution cycle is what I was thinking about.

I've also checked out migrations in CFWheels a little bit. They do
support multiple db types, and rails-style up- and down-grades, though
building out a down migration for every upgrade seems a bit insane,
when you probably won't use it and it's not always possible. They also
provide a UI with templates for upgrades. There are things about it
that I don't want (their naming conventions, I think), and I'm not sure
I care about templates, but it made me think about some things I
wasn't, and I will be taking a look at their code.

Thanks again,
Dave

> Thanks Den, looks interesting, will take a look at the code when I get
> home tonight, can't now. Does it try to have your upgrades be
> compatible with multiple db types (SQL Server, Oracle, MySQL, and
> Postgres are the ones I care about so far)? Building something to tie
> into the framework execution cycle is what I was thinking about.

It's pretty much just a forward-versioning SQL script runner. Meaning
you can roll things forward, but not revert them once they're at the
correct level.

A simple concept, but powerful. Uses numbered scripts, and a table in
the DB which tracks what number the DB is at. When you run the schema
tool, if the DB is at a lower number than the available scripts, it
runs the scripts in order, until it is at the "latest" script number.
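
The core loop is something like this in modern CFML (a rough sketch of
the idea, not Barney's actual code; assumes a schema_version table
with a single INT column and scripts named 1.sql, 2.sql, ... under
/migrations, one statement per file):

    component {
        function run(required string dsn) {
            var dir = expandPath("/migrations");
            // what number is the DB at? (0 if the table is empty)
            var current = queryExecute(
                "SELECT COALESCE(MAX(version), 0) AS v FROM schema_version",
                {}, { datasource: arguments.dsn }
            ).v;
            // collect the numbered scripts and sort them numerically
            var files = directoryList(dir, false, "name", "*.sql");
            arraySort(files, function(a, b) {
                return val(listFirst(a, ".")) - val(listFirst(b, "."));
            });
            // run everything above the current number, in order,
            // recording each step so a crash can resume where it left off
            for (var f in files) {
                var num = val(listFirst(f, "."));
                if (num <= current) continue;
                queryExecute(fileRead(dir & "/" & f), {}, { datasource: arguments.dsn });
                queryExecute(
                    "INSERT INTO schema_version (version) VALUES (:v)",
                    { v: num }, { datasource: arguments.dsn }
                );
            }
        }
    }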

It's also a handy way to keep multiple developers in sync. Pull down
the new code, the new SQL scripts... re-init, and bam! in sync with
the latest db schema, even if it's been 6 months since the last code
pull.

It's "by hand" powered SQL, so any multi-db support needs to be coded
by hand, but that's totally doable, so long as you use the cfc-based
SQL scripts, vs. just using numbered .sql files, which it can also
handle.
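
So a cfc-based script can branch per engine, along these lines (the
shape here is hypothetical, not the tool's real contract):

    component {
        // One numbered migration, written by hand for each engine.
        // Table/column names and dbType values are made up.
        function up(required string dsn, required string dbType) {
            // DATETIME on SQL Server/MySQL, TIMESTAMP on Postgres/Oracle
            var colType = listFindNoCase("postgresql,oracle", arguments.dbType)
                ? "TIMESTAMP" : "DATETIME";
            queryExecute(
                "ALTER TABLE users ADD last_login_at #colType# NULL",
                {}, { datasource: arguments.dsn }
            );
        }
    }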

Barney won a Wii for it in a contest, and I agreed with the judges--
it was one of the most useful contributions.

> I've also checked out migrations in CFWheels a little bit. They do
> support multiple db types, and rails-style up- and down-grades, though
> building out a down migration for every upgrade seems a bit insane,
> when you probably won't use it and it's not always possible. They also
> provide a UI with templates for upgrades. There are things about it
> that I don't want (their naming conventions, I think), and I'm not sure
> I care about templates, but it made me think about some things I
> wasn't, and I will be taking a look at their code.

That never hurts!

I've looked at lots of different versioning systems for databases
(Liquibase, etc.), and none of them have very elegant "down", or
reverting, capabilities. Which is understandable, due to the nature
of the beast.

Force be with you, esse! =)

:Den

@Dave,

"I was wondering if the framework prevented multiple requests from
running renit at once." - Basically init and reinit are the same
thing. Fundamental is loadColdbox(), called at app init and during
reloadchecks(). You could take this out, but then you would have to
init (loadcoldbox) manually, which would be disastrous when your site
reboots at 3am and your phone is on vibrate. (Coincidentally, the only
time the site goes down is when I leave my phone downstairs or on
vibrate :wink:

"unless Louis thinks it should be built into how fwrenit works" - I
felt the same way until Luis made me think about how it really works.
Your Application.cfc has to call reloadChecks() at request start, so
the only way NOT to hit reinit multiple times is to NOT call
reloadChecks() at the application level, which is what that code
does. Bottom line: You can't build a check into CB if CB is not yet
running.
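
If it helps, the shape of it is roughly this (a sketch of the idea,
not the exact code from that thread; reloadChecks() stands in for
whatever your Application.cfc template calls to trigger loadColdBox()):

    // In Application.cfc: gate the reload behind a named exclusive
    // lock so reloads never overlap. Note this only serializes them;
    // concurrent requests that each carry fwreinit still reload in turn.
    public boolean function onRequestStart(required string targetPage) {
        if (structKeyExists(url, "fwreinit")) {
            lock name="#this.name#-reinit" type="exclusive" timeout="60" {
                reloadChecks();
            }
        }
        return true;
    }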

"So I gather you do reinit after code changes, yes?" - Yes, we cache
our handlers, so it is necessary.

"How do you manage db updates?" Scheduled downtime.

Soon we will be using a load balancer along with DB replication. In
this instance, we COULD:

1. shunt all traffic to a single web server and a single DB instance
2. update the offline box and DB
3. switch all traffic to the new code/db
4. update the other boxes
5. turn replication back on
6. wait for replication thumbs-up
7. turn load balancer back on

The hard part is syncing transactions that happened on the old db to
the new db during phases 1 and 2. There are ways to transform data
coming into the new db schema during replication, but in my
opinion, that is way too much work when a normal code change / db
change takes just a few minutes.

Scheduled downtime is my friend. Even 45 minutes per month still
leaves you around three nines of uptime (45 / 43,200 minutes is about
0.1% downtime). Any manager that tells you scheduled downtime is
unacceptable needs to pony up for LOTS of resources (both human and
machine). :)

...

> The hard part is syncing transactions that happened on the old db to
> the new db during phases 1 and 2. There are ways to transform data
> coming into the new db schema during replication, but in my
> opinion, that is way too much work when a normal code change / db
> change takes just a few minutes.

I played with using version-based DSNs, e.g. "myDSN_1.2", and then
setting the previous version's DSN to read-only, but that wasn't
really viable.

> Scheduled downtime is my friend. Even 45 minutes per month still
> leaves you around three nines of uptime (45 / 43,200 minutes is about
> 0.1% downtime). Any manager that tells you scheduled downtime is
> unacceptable needs to pony up for LOTS of resources (both human and
> machine). :)

Plus one (million)! =)

:Den

We also have something like this that I wrote a few years ago; it too
is a forward-only script. Mine has the option of serialising the object
or storing the information in the database. I wasn't aware of Barney's
when I wrote mine.

But I want to say that the best tool I have seen to date is
http://www.liquibase.org/. I have used it on a few personal sites
before now, using ORM.

Regards,
Andrew Scott
http://www.andyscott.id.au/

Liquibase seems very interesting Andrew, thanks, wasn't aware of it
(though it was mentioned here earlier).

DB change management is only part of the scenario I was thinking of
addressing though; I'll write up a separate post on that when I'm a
tad less beat.

Dave

You're welcome.

Yeah, I was in a hurry earlier, but here is my way, or how I have done
it in the past. Before I start, keep in mind this is something I have
tuned to my needs, and it will depend on a few factors in how you do
your SDLC.

With that in mind, I use Subversion, as outlined in the book Pragmatic
Version Control Using Subversion. I suggest anyone who is serious
about their SDLC actually read this book; it is worth the read and the
investment.

So with this in place I have a testing branch; once code has gone
through the testing stage, it gets released there.

The reason for this separate branch is so that people can continue to
develop in the trunk on new and upcoming things, bugs etc. It also
means that the push is as painless as possible when going to
production. In an ideal world this would be an app that you would just
push in one hit; sometimes this is not possible, but that also depends
on your SDLC and other contributing factors.

Other tools that I have used in the past when it comes to this are the
Red Gate SQL Bundle; yes, it is a little pricey, but it is well worth
the money. It even has an option where you can sync the database schema
to files in your application for revision control if need be.

Which you might be interested in reading:
http://www.andyscott.id.au/2009/4/2/Version-control-database-schema-the-SQL-Compare-way-from-RedGate

Another tool that I have liked and prefer is Beyond Compare, by Scooter
Software. I tend not to use it as much now, as the way I develop has
changed over time to my first listed approach. But this tool and the
Red Gate SQL Bundle combined can be very valuable additions to your
arsenal.

In the end it will depend on what you have in place and what you are
prepared to move to, but I do highly recommend the approach in the
Subversion book. It takes a bit of discipline to remember to fix bugs
in the test branch, merge back to trunk, and work out of that branch
and not the trunk, but once you see the benefits you will not look
back.

Regards,
Andrew Scott
http://www.andyscott.id.au/
