There's a front-page diary here at the moment entitled "Team Coleman Fakes Website Crash," which in turn links to some interesting analysis by the fine folks at MN Publius.
Based on those observations, I'm going to argue that the Coleman technical people are either (a) lying or (b) incompetent. Or maybe (c) both.
I'm not going to recapitulate the material in the (currently front page) diary entry here or the MN Publius entry, but I will summarize: the Coleman people claimed, earlier today, that their web site crashed because it attracted tens of thousands of hits -- putatively from voters trying to find out their ballot status.
The available evidence doesn't support this.
Let me give you a little primer on how web sites and DNS and networking all play together to bring you pages like this one. I'm going to omit almost all of the complexity, gloss over many details, and cut a few corners -- so everyone who's about to "correct" my deliberate omissions, put your red pens down or I'm gonna have to go all Foghorn Leghorn on you.
A bunch of things have to work properly in order for your web browser to show you a web page. First, you've got to provide a valid domain name (I say, I say, son, put the red pens down, I really do know that browsers can handle IP addresses, now go run along and play with that old hound) and a valid protocol -- usually HTTP:
http://colemanforsenate.com/
The next thing that happens is that your system will query DNS -- the Domain Name Service -- to find out the IP (Internet Protocol) address of the site you want to go to. This spares you from having to remember and type:
http://208.42.168.197/
which is onerous, to say the least. DNS is responsible, among a number of other things, for mapping hostnames, like colemanforsenate.com (which also happens to be a domain name) to IP addresses.
This has to happen because your computer has no idea how to talk to a domain name; it only knows how to talk to IP addresses. DNS thus exists in part to provide a layer of abstraction for us feeble humans, who are pretty good at remembering things like dailykos.com and google.com but very bad at remembering 208.42.168.197.
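If you want to see this lookup happen, here's a minimal Python sketch of the same DNS query a browser makes. I'm using "localhost" so the example works offline; point it at any hostname you like (at the time of this post, looking up colemanforsenate.com this way returned 208.42.168.197).

```python
import socket

# The question your system asks DNS before it can connect anywhere:
# "what IP address does this hostname map to?"
# "localhost" is used here so the example runs without network access.
hostname = "localhost"
ip = socket.gethostbyname(hostname)
print(f"{hostname} -> {ip}")  # -> localhost -> 127.0.0.1
```

Swap in "dailykos.com" or "google.com" and you'll get back the kind of dotted-quad address your browser actually connects to.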
Your system will then, on behalf of your web browser, send a request to that IP address, on port 80 (I'll get to that), which looks like this:
GET / HTTP/1.0
That request amounts to "gimme the home page". It's sent on port 80 because systems which offer services (like web servers) offer those services on ports, whose numbers are defined by standards: 80 is web service, 25 is email, etc. Think of ports like drive-thru windows: gotta go to the right one to get the right service. Think of stuff like "GET / HTTP/1.0" as the native language of whoever it is that's staffing that particular window: speak the wrong language, you get nothing.
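For the curious, here's a self-contained Python sketch of that exchange: a toy "window staffer" listening on a port, and a client that speaks its language by sending GET / HTTP/1.0. It runs on a spare localhost port rather than a real site's port 80 (binding port 80 requires root privileges), and the reply text is made up for the example.

```python
import socket
import threading

# A toy server: the "drive-thru window staffer" who understands HTTP.
def toy_server(sock):
    conn, _ = sock.accept()
    request = conn.recv(1024)  # receives the client's "GET / HTTP/1.0"
    conn.sendall(b"HTTP/1.0 200 OK\r\n\r\nHello, caller\n")
    conn.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # 0 = let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=toy_server, args=(server,), daemon=True).start()

# The client side: connect to the port and speak the window's language.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"GET / HTTP/1.0\r\nHost: localhost\r\n\r\n")
reply = client.recv(1024).decode()
print(reply.splitlines()[0])    # -> HTTP/1.0 200 OK
client.close()
server.close()
```

Send gibberish instead of a well-formed request to a real web server and, just like at the wrong drive-thru window, you get nothing useful back.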
The next thing that happens is that the web server which is running there (if it's running and sane) will decide what to do with your request. Usually it'll return the web page you've asked for. Sometimes it'll deny your request because the page is private or doesn't exist. Sometimes it will ask some backend resource like a database to look a bunch of things up and then it will assemble those to present a page to you. (This is what happens here at DailyKos: every diary page and all the comments are in a database; when you asked for this diary entry, all of that got looked up, pulled out, glued together, and shipped to you.)
However it does it, though, the web server ships the page back to your computer, which hands it over to your web browser, which formats it and shows it to you. Thank you, Sir Tim Berners-Lee, and a cast of millions.
So now (FINALLY) we get to the Coleman web site. What they're saying is that they crashed today because they were overwhelmed with traffic -- and given that some of the discussion at MN Publius discussed MySQL errors (MySQL is a database and almost certainly what they built their back end with) it's not unreasonable to guess that it either really was overwhelmed...or they broke it.
It's what they did about it that's odd. They changed the address record in DNS from 208.42.168.197 to 1.1.1.1 -- which is a really bizarre choice. (And meanwhile, they left the site up, as a commenter at MN Publius observed, when he ran telnet 208.42.168.197 80 and was able to manually issue HTTP GET requests.) The reason this is bizarre is that (a) it's the wrong fix for the problem they claim they had and (b) it's not going to work very well anyway.
If the problem is that the backend database was being bashed by heavy traffic, then the obvious solution is to turn off that bit of functionality and only that, and let the rest of the web site keep on running. This is so well-known that some sites have a "kill switch" handy that administrators can throw whenever the need arises, freeing them from trying to remember the exact steps to use while in crisis mode.
If the problem was that the web server itself, the front end, was overwhelmed...then it would not have been functioning normally when the MN Publius commenter connected to it. (Which rules out, among other things, possible overload due to a link posted on Drudge.)
If the problem was network congestion, then the web server would have been slow to respond to the telnet connection, and slow to return the requested page.
I could go on, but here's the point: in none of these cases is there the slightest reason to change the IP address. It's not the right fix, because it's not the problem.
And it wouldn't work anyway -- here's why. DNS holds a huge amount of data -- and most of it doesn't change that often. So built into the DNS infrastructure of the entire Internet is a lot of caching. The idea is that once your system has asked the question "What is the IP address of colemanforsenate.com" and gotten an answer, it shouldn't need to ask the same damn question 18 more times in the next 4 minutes and get the same damn answer. So caching takes place at multiple levels -- probably on your system, in the caching DNS servers at your ISP, etc.
Which means that if, let's say, the Coleman web site was blown down because 100,000 people just hit it from their web browsers, changing the address record for it in DNS really isn't going to do much, because they've all cached the address and aren't going to query for it again any time soon. So while the DNS server may now be handing out "1.1.1.1" as the address to anybody new who comes along, everyone who's already asked has 208.42.168.197 and isn't going to bother re-asking for a while. (Yes, there's a mechanism, the TTL, to control how long an answer is valid, but using it effectively requires anticipating problems like this, not reacting to them.) So every time one of those people hits the "reload" button in their browser, they're going to hit the site again -- at 208.42.168.197. Even if they reboot (and thus likely clear the DNS cache on their local system), they will almost certainly query their ISP/university/company caching DNS servers when they try again, and those servers will cheerfully return...208.42.168.197.
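To make the caching argument concrete, here's a toy Python model of a caching resolver. The hostname, addresses, and TTL are illustrative, not the real records; the point is only the mechanism.

```python
import time

class ToyDnsCache:
    """A drastically simplified caching resolver."""

    def __init__(self):
        self._cache = {}  # hostname -> (ip, expiry timestamp)

    def resolve(self, hostname, authoritative_lookup, ttl=86400):
        entry = self._cache.get(hostname)
        if entry and entry[1] > time.time():
            return entry[0]  # cached answer; no new query is sent
        ip = authoritative_lookup(hostname)
        self._cache[hostname] = (ip, time.time() + ttl)
        return ip

# The "authoritative" records, as a plain dict for illustration.
records = {"colemanforsenate.com": "208.42.168.197"}
lookup = lambda name: records[name]

cache = ToyDnsCache()
print(cache.resolve("colemanforsenate.com", lookup))  # 208.42.168.197

# The site operators now "change DNS" to point at 1.1.1.1...
records["colemanforsenate.com"] = "1.1.1.1"

# ...but everyone who already asked keeps getting the cached answer
# until the TTL runs out:
print(cache.resolve("colemanforsenate.com", lookup))  # still 208.42.168.197
```

Until that cached entry expires, every reload goes right back to the old address -- which is exactly why the DNS change was pointless as an emergency measure.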
So what should they have done? Well, like I said above, if they were really seeing their backend database get pummeled, they should have switched off that function for a while, and stuck up a quick note that said "we've disabled this, come back later". If it was worse than that, say, the entire site was being hammered, then they should have configured the web server to return a very simple, very short, no-graphics static page that said "We're getting hammered, please be patient while we work out a fix" for ANY page request on the site. And if it was still worse than that, if the web server couldn't even issue a simple text-only static page fast enough to deal with the onslaught, then they should have asked their web host to configure its perimeter routers to just drop all incoming web requests for a while. Brute-force yes, but highly effective, since a move like that cuts off all traffic and brings welcome silence. Gradually re-enabling transmission of those requests (possibly with rate-limiting, possibly only for parts of the 'net) provides a way to begin providing at least some service to at least some people without taking everything out again.
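As a sketch of that "very simple, very short, no-graphics static page" fallback, here's a minimal Python server that answers every request for any page with one line of text. The port, status code, and wording are my choices for illustration, not anything the Coleman site actually ran; the snippet also fetches the page from itself so you can see the result.

```python
import threading
import urllib.error
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

PAGE = b"We're getting hammered, please be patient while we work out a fix.\n"

class MaintenanceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Same tiny answer for ANY path on the site: no database,
        # no graphics, nothing that can fall over under load.
        self.send_response(503)                 # 503 = Service Unavailable
        self.send_header("Content-Type", "text/plain")
        self.send_header("Retry-After", "600")  # hint: try again in 10 minutes
        self.end_headers()
        self.wfile.write(PAGE)

    def log_message(self, *args):
        pass  # stay quiet while under load

server = HTTPServer(("127.0.0.1", 0), MaintenanceHandler)  # 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# Ask for any page at all; we get the maintenance notice back.
try:
    urllib.request.urlopen(f"http://127.0.0.1:{port}/any/page/at/all")
except urllib.error.HTTPError as err:
    status, body = err.code, err.read().decode()
print(status, body.strip())
server.shutdown()
```

A page like this costs the server almost nothing per request, keeps visitors informed, and -- unlike fiddling with DNS -- takes effect the instant you deploy it.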
So -- to take this back where it started -- either we're not getting the whole truth about what happened, and part of what we're not getting explains why they chose to switch the DNS A record to point to 1.1.1.1 (in IANA-reserved space, by the way); or the people running that site have no idea how to deal with one of the routine problems that faces web site administrators every day.