Seriously, I can't imagine doing something like research or writing a PhD thesis without having at least one off-site backup.
Just get yourself a couple of pendrives and rotate them through somewhere other than where you live and work.
I personally have more data. For my backup needs I use three 4TB drives, a USB HDD docking station, and a script that detects when one of the drives is inserted and backs up the data from every place in my home that needs backing up.
I have one drive inserted in the station (once inserted, it will attempt one backup per day), one standing by or in transit, and one with the family.
At any point in time at least one of those drives is safe with my family, whom I visit regularly. When I drive there I yank the backup drive from the station, immediately replace it with the standby drive, and take the fresh backup with me. When I am heading back home, I take the drive that was previously kept by my family to use as the new standby.
It is not complicated, you can do the same with a pendrive.
There is no paid software involved. I wrote the script over the years, but it is simple and has no extra features. It only detects when a drive is inserted (each drive carries a file with identifying info), makes full/incremental backups (a full backup automatically after insertion, an incremental one every night), and removes old backups and their linked increments as needed. Restore is manual, as the files are just tar archives.
To be fair, this is not like data on your own PC or laptop, where you know that if the disk dies, the data is gone. You'd expect an IT setup run by a professional university to have redundant backups for you, so that nothing is lost even if something breaks or gets deleted. But that depends on the university, of course, and it must be hard for students to know how qualified the IT department of their university is.
I've never seen that. The only time I needed a university backup was when a university unit specialized in IT (making websites, databases...) lost 3 months of work on a system they were managing and that we needed to migrate away from (to get rid of them and their incompetence). The deal was for them to give us the data on that day; instead they just nuked the db and recycled the hard drives. There were supposedly university-wide backups, so I asked the service handling them (another unit) to send me the backups. That's when they discovered that no backup had worked for the last few months, for the whole university...

Don't ever trust your university with data. Make yourself multiple backups, encrypt them, and keep a copy of the key/password in a safe place. I have horror stories linked to each of these recommendations. One fun one was the single backup drive that was plugged in when a user opened a "fedex" email and got a cryptolocker that locked all the network drives and the backup. Thankfully the person managing the networked drives had off-site backups (and archiving).
> it must be hard for students to know how qualified the IT department of their university is.
The trick is to look at the most productive research groups, in terms of funding and output, and see if they use the resources from the IT department or not.
You can't expect that, I'm afraid. We have tens of millions of people using all sorts of 'in the cloud' services thinking it's as safe as it can possibly be. Same principle in this case. I do agree on the quality of the uni staff, though.
TFA also talks about data stored in locally installed software; lots of scientific software I've seen puts data in a default location and you have to go out of your way to know where it is on disk.
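If you're not sure where a program squirrels its data away, one blunt but effective trick is to scan the usual per-user application folders for recently modified files. A sketch (the roots you'd pass in, such as %APPDATA% or ~/.local/share, are common defaults, not a complete or authoritative list):

```python
"""Find candidate data files hidden by locally installed software by
scanning likely application-data folders for recent modifications."""
import os
import time

def recently_modified(roots, days=7):
    """Yield (path, age_in_days) for files under `roots` touched in the window."""
    cutoff = time.time() - days * 86400
    for root in roots:
        if not os.path.isdir(root):
            continue  # skip roots that don't exist on this machine
        for dirpath, _dirs, files in os.walk(root):
            for f in files:
                path = os.path.join(dirpath, f)
                try:
                    mtime = os.path.getmtime(path)
                except OSError:
                    continue  # broken symlink, permission error, etc.
                if mtime >= cutoff:
                    yield path, (time.time() - mtime) / 86400
```

Run it while the scientific software in question is in active use and the files it is writing to tend to float to the top of the results.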
Depending on what research you are doing (if it includes confidential information about people participating in your experiment for example), taking your own off-site backup is a quick way to get fired / thrown out of your PhD.
In this case the University should still provide a good way of you backing everything up of course, and you can still back up anything which isn't confidential.
They said they have shared drives and these are unaffected. Shared drives are typically backed up by IT. It looks like a perfect place to store backups for most people that don't have a lot of data.
> Seriously, I can't imagine doing something like research or writing a PhD thesis without having at least one off-site backup.
And yet people do this. Last year I read a newspaper article about a guy who had worked two years on his PhD thesis, then left his laptop on a bus, and now those two years of work are gone forever.
The real problem here is not just backups, though. It's a severe ignorance of technology and its limitations that leads to such carelessness. The old saying about advanced tech seeming like magic should be a warning. No sane person goes out into the rain without an umbrella thinking they just won't get wet because yesterday they didn't, as it's painfully obvious that water is wet. Yet when someone stores everything on a single point of failure they don't lose a second of sleep over what could go wrong; it's simply not in their perceived range of possibilities that e.g. a hard drive could fail.
People are very, very bad at proactively mitigating low-probability disasters. It is not that it is complicated, although you might underestimate how hard it is for a layman, inundated with a deluge of commercial offerings trying to outdo each other in interface obtuseness, to keep a decent, up-to-date backup that can actually be restored.
We seem to instinctively err on the side of avoiding any short-term cost, no matter how grave the potential consequences, if the probability of those consequences is low or obfuscated.
You can see this not just in backups, but in lifestyle choices, business decisions, construction, natural resources, financial speculation ...
Any situation where you can say "it does not happen often, but if it does the results will be catastrophic", you can bet that the risks will be unhedged.
Heck, my kids are theoretically engineers (at least their degrees say so!), but even after buying Backblaze for them, I can't even get them to hit Ctrl-S from time to time. I think their mom must have fucked the milkman, because she is smarter than this herself...
A very simple 'backup' is to email it to yourself. Of course it's not a legit backup, but a Word file, even if it's 500 pages, won't take up much space.
Pendrives/USB ports may be locked down. But you can always zip & email the most critical doc/xls files. This way you also get 'versioning'.
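The zip-and-email trick above is easy to script. A minimal sketch that produces a timestamped zip per run, so each emailed attachment doubles as a version (the folder and file names are made-up examples; the emailing step itself is left to your mail client or smtplib):

```python
"""Poor man's versioning: zip the critical files with a timestamp in the
name, then mail the zip to yourself. Names here are illustrative only."""
import time
import zipfile
from pathlib import Path

def snapshot(files, out_dir="snapshots"):
    """Zip the given files into <out_dir>/backup-YYYYmmdd-HHMMSS.zip."""
    Path(out_dir).mkdir(exist_ok=True)
    name = Path(out_dir) / f"backup-{time.strftime('%Y%m%d-%H%M%S')}.zip"
    with zipfile.ZipFile(name, "w", zipfile.ZIP_DEFLATED) as z:
        for f in files:
            # store by basename so the zip restores cleanly anywhere
            z.write(f, arcname=Path(f).name)
    return name
```

Each run yields a new dated archive, so the pile of old zips (or old emails) is your version history.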
Something that I assume people do is journaling. So even if you DID lose 1-3 months' worth of work (in academia), you can read your (handwritten) notes, which will save a lot of time recalling the key points, key resources, etc.
Either way, whoever wanted to wipe a couple of computers and managed to push the script to the full AD group... I've got no words. The article doesn't mention what OS they were sitting on, but I will go ahead and assume it's Windows on an Active Directory, and the admin pushed the new 'thing' to the wrong OU.
The cycle typically takes a couple of weeks, which should be enough for me to notice a problem and use one of the drives to restore the files.
I don't believe I am susceptible to a ransomware attack, though. All my machines, including the laptop, run Linux, and most of the software that contacts the outside world (including the browser, Zoom, etc.) runs within dedicated containers or VMs.
Now, this does not make them safe against targeted attacks, but I did not design the setup to stand a chance against a really determined attacker. I just think no generic ransomware will expect to have to break out of a container or VM.
Sounds like it has an RTO of “transit to and from mom/dad’s house” and an RPO of “however often I went to mom/dad’s house”.
Those are above and beyond what I would personally need to avoid the threat of ransomware. If you need improvements, visit your family more often - win win :)
Everybody I know who works at a university has a personal rule: never let the IT department touch an important computer.
The workers are "required" to buy their own computers anyway, so it's not like the university has any say over how the computers are managed.
Everybody has a story about IT ruining their computer by installing some sort of unworkable security software. There's also a familiar story of IT coming in and wiping out a computer that runs some ancient scientific instrument that isn't supported by its vendor any more. And the vendor kindly sending them an unofficial copy of the ancient software because, well, because it's happened before.
To be fair, dealing with IT at a university must be a nightmare. My workplace takes the approach of providing all work computers, of a specific brand, and retiring the old ones before they turn into a support headache.
I can go one step further, based on what I've seen - "Shadow IT" that has its own help-desk line and ticketing system... Running their own LAN with interconnects to "main IT".
The end result is that nobody ever lets "main IT" near anything. Most people with important work on the go use shadow IT. People who know what they're doing run their own personal shadow IT (but would use the real shadow IT if they had issues like needing to reinstall a system, since that way it won't get connected to the main IT domain...)
You are absolutely correct - it's an absolute nightmare. Many university networks originate from the era of the flat LAN, where every system got a routable public IP, reachable via the internet. There are people running all kinds of servers and services and jobs across the network. None of them are documented, some might be relied upon by important internal or external things...
There's no standardisation of hardware generally. If you're lucky there are some managed desktops with standard configurations for student computer labs and office administrative staff. Don't expect to be able to tell professors what they'll use; they're buying computers on their research grants for themselves and their teams. And their own NAS servers and everything. Heck, the "shadow IT with a help-desk" was even offering storage and VMs on their shadow infrastructure.
I've seen people trying to use Windows XP on the network because some incredibly expensive piece of equipment needs it, and they would like to be able to browse the internet while they wait on it to do its work (!) Fortunately, shadow IT were on the lookout for that, and helped them set up a second computer on a supported OS, and get the XP box offline.
It is a nightmare, yes. I used to run machines on a network isolated from IT's network. Before that, they would come and ask to inspect every machine their IP or ARP scanner found. The problem is that they'd send a student who knows nothing except how to extract info from machines... try explaining to them what VMs on bridged networks are, etc. They have destroyed more data and computers than any student ever has (one machine we gave them came back with Windows taking 10 minutes to boot, and they told us it was fine, just leave it running; no kidding). For users' machines, rotating them is fine. For instrument computers, though, that's not doable; many labs still run Windows 3.11, XP, and so on. Of course you could reverse engineer the custom acquisition cards and their DOS drivers, but you would also have to deal with the custom Oracle database, the parallel-port dongle, and the weird timing hacks that make the software run only on machines with a specific graphics card...
I've seen a whole high performance shadow network running behind NAT before, in order to get around the issue you describe.
It worked fairly well, as the NAT router presented a WAN MAC address from a device that had already been "approved". The main reason was to get local 10GbE available when the official LAN was still 100 Mbit (!!), and to enable new devices to be connected to the network without going through whatever official process was in place.
I'd never risk letting official IT people image a system - I've had to send people with laptops that were missing graphics drivers to "shadow IT support" to get their system properly installed by someone that knows how to clean install a system and install the necessary drivers.
> There's also a familiar story of IT coming in and wiping out a computer that runs some ancient scientific instrument that isn't supported by its vendor any more.
When I was a student at VUW, the institution this article is about (early 2000s), computer science had its own, separate IT system. With staff. Who were legendary. I see the head sysadmin from back then is still there...
No "shadow IT," this was completely official and above board.
Everything that could be was NetBSD.
Compsci had its own network range within the university's /16.
I remember the ease of dealing with them, compared to the battle to get one network port livened so the computer club could expand its nascent WiFi network to the quad. Heck, within compsci land, the computer club managed to get a server with a static IP and even got it exposed to the internet.
The people whose literal job it is to provide a service and protect the data lost it. This seems to have been glossed over in the comments. I guess none of you want the responsibility of doing your job properly, and don't deserve to keep it.
Blaming the victims is not on. Assuming the victims should have done a better job than paid and trained professionals is not on. Assuming the victims are all CS majors is not on.
"Users should know better!" you scream, into a world where currently almost everything stored locally is already safely shuffled to the cloud to be retained even if your desktop dies (Windows 10 OneDrive, and Apple iCloud, and other solutions like Dropbox, Box). Except, obviously, at this university.
> Assuming the victims are all CS majors is not on.
A large part of VUW's research focus covers law and politics [1], so this statement rings quite true. I would not expect that all researchers even in topics adjacent to IT such as physics and chemistry would understand prior to this event that IT could remote wipe their computer, where the limits of local and network storage lie, or what data storage locations are apparently their personal responsibility to back up.
When I was younger, the first time I learned to ski, my cousin (who was teaching me) shoved me over before we hit the bunny slope and taught me how to get up.
I now apply this "What's the recovery from 95th percentile occurrence?" thinking to lots of new things.
Perhaps it should be culture in many departments to intentionally harm someone at small scale to insulate them from large scale. An inoculation of sorts.
I think if I have children, perhaps it will be worth having an intentional disk failure on some low cost thing to teach the importance of backups. Maybe have the hardware lose something like a save game in a game they've barely started. Hard not to make it some sort of pathological fear while still instilling the value.
A few experiences drove it home for me. One fictional: watching PCU and having a "haha that's funny... oh crap" realization. One personal: when the HDD in my circa-2000 PC failed catastrophically (it came to a grinding halt); fortunately I didn't have anything critically important on it at the time. One collective: a recreation of the PCU incident at a programming contest in a makeshift lab. Fortunately my computer was fine, and my team came out ok (3rd or 4th, memory fuzzy now) thanks to that and my then-habitual use of C-s. My teammates' computers were also fine, but apparently they thought they only needed to hit C-s every 30+ minutes, and they lost a lot of work.
Of course, I also grew up with more unreliable systems (ever had someone put a magnet near your floppy disks?) so I'd already internalized a lot of the idea of being conscientious of my backups, but it still took a couple near catastrophic experiences to turn it into a true habit and routine.
These are interesting anecdotes and it is curious to see that your response was to learn that backups were important. Seems like your ability to learn there was superior to mine.
For my part, I, too, grew up with unreliable hardware¹ and each escape actually enhanced deviance. Every time I got away with not losing data because I found it somewhere else or found it unnecessary made me act closer to the wire. I'd risk more each time.
One time I even saw a friend lose his sister's paper (causing her to have to rewrite a significant amount) and it did nothing. I even lost data due to power loss on XFS². Did nothing.
Then one day I just started getting nervous about data loss and it's hard for me to determine what it was. Now everything is backed up, but I wish I knew what it was.
It's the fact that I had this strange reaction to encountering data loss that makes me hesitant to recommend the strategy I espoused in GP.
¹ all the same floppies, low quality CD-RWs, old spinning disks, and to compound all this I had poor quality power that would intermittently fail
² the famous file truncation, if you were around back when Reiser was FS dev and not murderer
On my first computer, I managed to format the wrong partition. Nearly a year's worth of stuff was gone. Later on I did something equally stupid and lost all the photos I had from my teen years. Now I've got copies of my photos on AWS, OneDrive, a portable HDD, and the laptop, and I'm still panicking about what-if-Amazon-went-bust kinds of things.
I don't really know Windows enterprise distributed management, but from my experience at a former employer, my impression is that some directories on a Windows computer end up "managed" and others are purely local.
We were always told to put important things only in certain "managed" areas, because backups were taken only from those directories (including for compliance purposes). Files in "managed" directories would also magically show up when you logged in on any managed Windows desktop; the idea was that the local HD was disposable.
My guess is that, ironically, those were exactly the directories from which you would have lost your files in this case.
One would think they would have backups of everything, but if they're telling people not to log in to the computer, presumably to avoid overwriting deleted bytes, it sounds like maybe not?
I'm comfortable holding PhD students to a higher standard. I sympathize with the amount of work ahead of these students, but I don't place the blame anywhere except on them.
So much for empathy. Not all students are as technically literate as HN readers or in the same position to understand or manage the data they deal with. Let's not rush to blame them.
Some students and staff likely use specialised (often very, very expensive) software that locally processes gigabytes of data, e.g. in the fields of biology and chemistry. These datasets or simulations will be in proprietary formats buried deep in some folder somewhere, or in an even more inaccessible database hidden from the user. The PCs in question could also be physically tied to hardware systems in a lab. They're not going to expect that machine to be randomly wiped while working on the data, possibly for weeks at a time. They don't know where the data is. At best they might export the data (a feature which probably doesn't exist for most programs) and archive it at the end of their project. This IT screwup will hurt them a lot.
It's easy not to realise data is stored inside a particular program, rather than in more easily backed up Documents folders etc. When I was very young (13-ish, year 2000) I lost primary school photos because I didn't realise that they were stored inside the Program Files directory of the Kodak camera download software I was using, and not inside the Documents and Settings directory.
Ever since then I back up and image the entire HDD, and I have almost literally terabytes of extra OS images "just in case" I forgot or lost something. Hasn't really been needed yet. Except I still can't find my Bitcoin from 2010. Doh.
At a small research facility long ago, we were reprimanded for not storing our data on the server. “What if it crashes and your work is lost!” Within 3 weeks the server had a harddisk crash. There were no backups.
There are two kinds of people: those who do backups and those who will...
They very likely do. On the M: and Z: drive.
As far as I can see this is just IT saying “if you stored your data somewhere you shouldn’t have, it’s now gone”.
But of course they can’t say that, so instead they have to help people recover from their mistake.
Or maybe the mistake was thinking people would actually do what you told them instead of whatever was more convenient.
I know someone who does this. The answer to this immovable mountain of a problem is an unstoppable force -- throw students at the problem.
Always mount a scratch monkey.
http://www.jargon.net/jargonfile/s/scratchmonkey.html
No "shadow IT," this was completely official and above board.
Everything that could be was NetBSD.
Compsci had its own network range within the university's /16.
I remember the ease of dealing with them, compared to the battle to get one network port livened so the computer club could expand its nascent WiFi network to the quad. Heck, within compsci land, the computer club managed to get a server with a static IP and even got it exposed to the internet.
Blaming the victims is not on. Assuming the victims should have done a better job than paid and trained professionals is not on. Assuming the victims are all CS majors is not on.
"Users should know better!" you scream, into a world where currently almost everything stored locally is already safely shuffled to the cloud to be retained even if your desktop dies (Windows 10 OneDrive, and Apple iCloud, and other solutions like Dropbox, Box). Except, obviously, at this university.
A large part of VUW's research focus covers law and politics [1], so this statement rings quite true. I would not expect that all researchers even in topics adjacent to IT such as physics and chemistry would understand prior to this event that IT could remote wipe their computer, where the limits of local and network storage lie, or what data storage locations are apparently their personal responsibility to back up.
[1] https://www.vuw.ac.nz/research/strengths/research-focus
In fairness, it's madness not to make any backups of important data for an entire year. Hard drives die all the time.