dsego · 4 years ago
> but at the time the code seemed completely correct to me

It always does.

> Well, it teaches me to do more diverse tests when doing destructive operations.

Or add some logging and do a dry run and check the results, literally simple print statements:

    print("-----")
    print("Downloading videos ids from url: {url}")
    print(list of ids)
    ...
    ...
    ...
    # delete()  dangerous action commented out until I'm sure it's right
    print("I'm about to delete video {id}")

    print("Deleted {count} videos") # maybe even assert
    ...
Then dump out to a file and spot check it five times before running for real.

dkersten · 4 years ago
I was involved with archiving of data that was legally required to be retained for PSD2 compliance. So it was pretty important that the data was correctly archived, but it was just as important that it was properly removed from other places due to data protection.

This is basically the approach that was taken: log before and after every action exactly what data or files are being acted on and how. Don't actually do it. Then have multiple people inspect the logs. Once ok'd, run again, with manual prompts after each log item asking to continue, for the first few files/bits of data. Only after that was ok'd too did it run the remainder.

In other things I've worked on, I've taken the Terraform-style "plan first, then apply the plan" approach, with manual inspection of the plan in between.
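Sketched in Python, that log-first, prompt-later flow might look something like this (a minimal sketch, not the actual tooling; `archive_and_delete` and its internals are hypothetical):

    import logging

    logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")

    def archive_and_delete(items, dry_run=True, prompt_first_n=5):
        for i, item in enumerate(items):
            logging.info("about to archive+delete %r", item)
            if dry_run:
                continue  # first pass: log only, then have people inspect the output
            if i < prompt_first_n and input(f"process {item!r}? [y/N] ") != "y":
                logging.info("aborted at %r", item)
                return
            # ... archive the item, verify the archive, then delete the original ...
            logging.info("finished %r", item)

Run once with dry_run=True and review the log; run again with dry_run=False and it prompts for the first few items before processing the remainder.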

tauwauwau · 4 years ago
Once we get used to doing the same thing multiple times a day, it doesn't matter if the log shows that we're about to take a destructive action, we'll still do it. The only thing that is foolproof is to not take the destructive action at all, because people make mistakes; it's human nature. I don't know how this can be implemented, maybe encrypt the files or take a backup in some other location (which may not be allowed).

Multiple reviewers here didn't catch the mistake:

https://www.bloombergquint.com/markets/citi-s-900-million-mi...

dredmorbius · 4 years ago
mv then rm is another idiom. So long as you have the space.

For database entries, flag for deletion, then delete.

In the files case, the move or rename also accomplishes the result of breaking any functionality which still relies on those files ... whilst you can still recover.

Way back in the day I was doing filesystem surgery on a Linux system, shuffling partitions around. I meant to issue the 'rm -rf .' in a specific directory; I happened to be in root.

However ...

- I'd booted a live-Linux version. (This was back when those still ran from floppy).

- I'd mounted all partitions other than the one I was performing surgery on '-ro' (read-only).

So what I bought was a reboot, and an opportunity to see what a Linux system with an active shell, but no executables, looks like.

Plan ahead. Make big changes in stages. Measure twice (or 3, or 10, or 20 times), cut once. Sit on your hands for a minute before running as root. Paste into an editor session (C-x C-e Readline command, as noted elsewhere in this thread).

Have backups.

crispyambulance · 4 years ago

  > ... Then have multiple people inspect the logs. Once ok'd, run again, with manual prompts after each log item asking to continue...
This sort-of reminds me of some "critical" work I had to do a couple of decades ago. I was in a shop that used this horrifically tedious tool for designing masks for special kinds of photonic devices-- basically it was tracing out optical waveguides that would be placed on a crystal that was processed much like a silicon IC.

The process was for TWO of us to sit in front of computer and review the curves in this crazy old EDA layout tool called "L-edit" before it got sent to have the actual masks made (which were very expensive). It took HOURS to check everything.

The first hour was tolerable but then boredom started to creep in and we got sloppy. The whole reason TWO people got tasked with this was because it was thought that we would keep each other focused-- 2 pairs of eyes are better than one, right? Instead, it just underscored the tedium of it all. One day someone walked in and found us BOTH in DEEP SLEEP in front of the monitor. Having two people didn't decrease the waste caused by mistakes, it just bored the hell out of more people.

mmmm2 · 4 years ago
Another good approach is to do deletions slowly. Put sleeps between each operation, and log everything. That way, if you realize something is broken, you have a chance of catching it before it's too late.
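Something like this, as a sketch (the `delete` callable stands in for whatever the real destructive operation is):

    import time

    def slow_delete(items, delete, pause_seconds=1.0):
        for item in items:
            print(f"deleting {item}")    # log everything
            delete(item)                 # the actual destructive call
            time.sleep(pause_seconds)    # slow enough to Ctrl-C when the log looks wrong
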
JadeNB · 4 years ago
> Then have multiple people inspect the logs.

I think that this is the most important part of any check. Your parent refers to checking the log five times, but, at least in my experience, I won't catch any more errors on the fifth time than the first—if I once saw what I expected rather than what was there, I'll keep doing so. Of course everyone has their blind spots, but, as in the famous Swiss-cheese approach, we just hope that they don't line up!

zeristor · 4 years ago
Yes, I love the idea of the Plan Apply.
water8 · 4 years ago
It never hurts to ask for another set of eyes to review. At the least if something goes awry, the blame isn't solely on you.
csours · 4 years ago
Make a plan, check the plan, [fix the plan, check the plan (loop)], do the plan

See PDCA for a more time-critical decision loop. https://en.wikipedia.org/wiki/PDCA

zrail · 4 years ago
Another technique that I've used with good success is to write a script that dumps out bash commands to delete files individually. I can visually inspect the file, analyze it with other tools, etc and then when I'm happy it's correct just "bash file_full_of_rms.sh" and be confident that it did the right thing.
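A sketch of the generator half in Python (the selection logic is whatever your real criteria are; `shlex.quote` keeps filenames with spaces or newlines from breaking the script):

    import shlex

    def write_rm_script(paths, out="file_full_of_rms.sh"):
        with open(out, "w") as f:
            f.write("#!/bin/sh\nset -e\n")
            for p in paths:
                f.write(f"rm -- {shlex.quote(p)}\n")  # quoted, so odd filenames are safe

    # Inspect, grep, wc -l the output, then: bash file_full_of_rms.sh
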
francis-io · 4 years ago
This was taught to me in my first linux admin job.

I was running commands manually to interact with files and databases, but was quickly shown that even just writing all the commands out, one by one, gives room to review them personally and get a peer review, and also helps with typos. I could ask a colleague "I'm about to run all these commands on the DB, do you see any problem with this?". It also reduces the blame if things go wrong, since the commands passed approval by two engineers.

While I'm thinking back, another little tip I was told was to always put a "#" in front of any command I paste into a terminal. This stops an accidentally copied carriage return from executing the command.

cruano · 4 years ago
That was our SOP for running DELETE SQL commands on production too, a script that generates a .sql that's run manually. It saved our asses a fair number of times.
hinkley · 4 years ago
I tend to write one script that emits a list of files, and another that takes a list of files as arguments.

It's simple to manually test corner cases, and then when everything is smooth I can just

    script1 | xargs script2
It's also handy if the process gets interrupted in the middle, because running script1 again generates a shorter list the second time, without having to generate the file again.

When I'm trying to get script1 right I can pipe it to a file, and cat the file to work out what the next sed or awk script needs to be.

KMnO4 · 4 years ago
Ah, I’m glad I’m not the only one who did this. It also means that you can fix things when they break halfway. Say you get an error when the script is processing entry 101 (perhaps it’s running files through ffmpeg). Just fix the error and delete the first 100 lines.
wildmanx · 4 years ago
The only issue with that is if subsequent lines implicitly assume that earlier ones executed as expected, e.g. without error.

Over-simplified example:

1. Copy stuff from A to B

2. Delete stuff from A

(Obviously you wouldn't do it like that, but just for illustration purposes.) It's all fine, but (2) assumes that (1) succeeded. If it didn't, maybe no space left, maybe missing permissions on B, whatnot, then (2) should not be executed. In this simple example you could tie them with `&&` or so (or just use an atomic move), but let's say these are many many commands and things are more complex.
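As a sketch in Python, the way to encode that dependency is to make step (1) raise on failure so step (2) never runs (the size check is just an illustrative extra guard):

    import os, shutil

    def copy_then_delete(src, dst):
        shutil.copy2(src, dst)                          # (1) copy; raises on most failures
        if os.path.getsize(dst) != os.path.getsize(src):
            raise OSError(f"incomplete copy of {src}")  # e.g. no space left on B
        os.remove(src)                                  # (2) runs only if (1) succeeded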

XorNot · 4 years ago
At the point you're doing this, you should be using a proper programming language with better defined string handling semantics though. In every place it comes up you'll have access to Python and can call the unlink command directly and much more safely - plus a debugging environment which you can actually step through if you're unsure.
bambax · 4 years ago
Yes. Also, maybe not have a delete action in the middle of a script. It's usually better to build a list of items to be deleted. In that case, two lists: items to be deleted, items to be kept. Then compare the lists:

- make sure the sum of their lengths == number of total current items

- make sure items_to_be_kept.length != 0

- make sure no two items appear in both lists

- check some items chosen at random to see if they were sorted in the correct list

At this point the only possible mistake left is to confuse the lists and send the "to_be_kept" one to the delete script; a dry run of the delete list may be in order.
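In Python, those checks might look like this (a sketch with hypothetical names):

    import random

    def check_partition(to_delete, to_keep, total_count):
        to_delete, to_keep = set(to_delete), set(to_keep)
        assert len(to_delete) + len(to_keep) == total_count  # lengths sum to the total
        assert len(to_keep) != 0                             # an empty keep list is a red flag
        assert not (to_delete & to_keep)                     # no item in both lists
        for item in random.sample(sorted(to_delete), min(10, len(to_delete))):
            print("will DELETE:", item)                      # spot-check by hand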

ectopod · 4 years ago
This. The original approach can fail horribly if there's a problem on the server when you run the script for real. Your code can be perfect but that's no guarantee the server will always return what it ought to.
pc86 · 4 years ago
I've had good success with this approach: have two distinct scripts generate the two lists, then in addition to your items here, also check that every item appears in exactly one of the lists.
ufo · 4 years ago
What do you recommend, to not get into trouble if there are spaces or newlines in the file names?
gilleain · 4 years ago
Yes, I find command line tools that have a "--dry-run" flag to be very helpful. If the tool (or script or whatever) is performing some destructive or expensive change, then having the ability to ask "what do you think I want to do?" is great.

It's like the difference between "do what I say" and "do what I mean"...

bzxcvbn · 4 years ago
That's what I like about PowerShell. Every script can include a "SupportsShouldProcess" [1] attribute. What this means is that you can pass two new arguments to your script, which have standardized names across the whole platform:

- -WhatIf to see what would happen if you run the script;

- -Confirm, which asks for confirmation before any potentially destructive action.

Moreover, these arguments get passed down to any command in your script that supports them. So you can write something like:

    [CmdletBinding(SupportsShouldProcess)]
    param ([Parameter()] [string] $FolderToBeDeleted)
    
    # I'm using bash-like aliases but these are really powershell cmdlets!
    echo "Deleting files in $FolderToBeDeleted"
    $files = @(ls $FolderToBeDeleted -rec -file)
    echo "Found $($files.Length) files"
    rm $files
If I call this script with -WhatIf, it will only display the list of files to be deleted without doing anything. If I call it with -Confirm, it will ask for confirmation before each file, with an option to abort, debug the script, or process the rest without confirming again.

I can also declare that my script is "High" impact with the "ConfirmImpact = High" switch. This will make it so that the user gets asked for confirmation without explicitly passing -Confirm. A user can set their $ConfirmPreference to High, Medium, Low, or None, to make sure they get asked for confirmation for any script that declare an impact at least as high as their preference.

[1]: https://docs.microsoft.com/en-us/powershell/scripting/learn/...

mmcclimon · 4 years ago
The rule we have is that anything that is not idempotent and not run as a matter of daily routine must dry-run by default, and not take action unless you pass --really. This has saved my bacon many times!
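With argparse, that convention is only a few lines (a sketch; `find_items_to_delete` and `delete` are hypothetical stand-ins):

    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument("--really", action="store_true",
                        help="actually perform the deletions (default: dry run)")
    args = parser.parse_args()

    for item in find_items_to_delete():  # hypothetical selection function
        if args.really:
            print(f"deleting {item}")
            delete(item)                 # hypothetical destructive call
        else:
            print(f"[dry run] would delete {item}")
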
rjh29 · 4 years ago
Going further, make it dry run by default and have an --execute flag to actually run the commands: this encourages the user to check the dryrun output first.
FriedrichN · 4 years ago
All my tools that have a possible destructive outcome use either an interactive stdin prompt or a --live option. I like the idea of dry running by default.
kortex · 4 years ago
This is why I like to always write any sort of user-script batch-job tools (backfills, purges, scrapers) with a "porcelain and plumbing" approach: The first step generates a fully declarative manifest of files/uris/commands (usually just json) and the second step actually executes them. I've used a --dry-run flag to just output the manifest, but I just read some folks use a --live-run flag to enable, with dry-run being the default, and I like that much better so I'll be using that going forward.

This pattern has the added benefit that it makes it really easy to write unit tests, which is something often sorely lacking in these sorts of batch scripts. It also makes full automation down the line a breeze, since you have nice shearing layers between your components.

http://www.laputan.org/mud/mud.html#ShearingLayers
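A minimal sketch of that porcelain/plumbing split (names hypothetical): the plan step writes the declarative manifest, and a separate apply step is the only thing that touches the filesystem.

    import json, os, sys

    def plan(paths, manifest="manifest.json"):
        actions = [{"op": "delete", "path": p} for p in paths]
        with open(manifest, "w") as f:
            json.dump(actions, f, indent=2)  # declarative: diff it, review it, unit-test it

    def apply(manifest="manifest.json"):
        with open(manifest) as f:
            for action in json.load(f):
                assert action["op"] == "delete"
                os.remove(action["path"])

    if __name__ == "__main__":
        if "--live-run" in sys.argv:  # dry (plan-only) is the default
            apply()
        else:
            plan(sys.argv[1:])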

InfoSecErik · 4 years ago
I tend towards a --dry-run flag for creative actions and --confirm for destructive actions. Probably slightly annoying that the commands end up seemingly different, but it sure beats accidentally nuking something important.
mkr-hn · 4 years ago
This sounds like a "do nothing script."

https://news.ycombinator.com/item?id=29083367

It defaults to not doing anything so you can gradually and selectively have it do something.

Learned about it when I posted my command line checklist tool on HN: https://github.com/givemefoxes/sneklist

(https://news.ycombinator.com/item?id=25811276)

You could use it to summon up a checklist of to-dos like "make sure the collection in the dictionary has the expected number of values" before a "do you want to proceed? Y/n"

mipmap04 · 4 years ago
I do this too, but I also take a count of the expected number of items to be deleted. If the collection I'm iterating over doesn't have exactly the number of objects I expect, I don't proceed.
lifthrasiir · 4 years ago
Human-in-the-loop is such an important concept in ops, and yet everyone (including me) seems to learn it the hard way.
pc86 · 4 years ago
I just want to say as someone currently working on a script to delete approximately 3.2TB of a ~4TB production database, this subthread is pure gold.
rawgabbit · 4 years ago
To ensure that the files are actually downloaded (step 1) before deleting the original (step 2), I would make step 1 an input to step 2. That is, step 2 cannot work without step 1. Something like:

    (step1) Download video from URL.  Include the Id in the filename.
    (step2) Grab the list of files that have been downloaded and parse to get the Id.  Using the Id, delete the original file.
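A sketch of step 2 in Python (names hypothetical; the point is that the ids come only from files that actually landed on disk):

    import os, re

    def step2_delete_originals(download_dir, delete_original):
        for name in os.listdir(download_dir):         # only files step 1 actually downloaded
            m = re.match(r"video_(\d+)\.mp4$", name)  # step 1 embedded the id in the filename
            if m:
                delete_original(m.group(1))           # step 2 cannot run without step 1's output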

veltas · 4 years ago
Yep, even writing a simple wildcard at command-line I will 'echo' before I 'rm'.
pjerem · 4 years ago
On computers I own, I always install "trash-cli" and I even created an alias from rm to trash. It's like rm, but it goes to the good old trash. It will not save your prod, but it's pretty useful on your own computer at least.
mbiondi · 4 years ago
Agreed, I've also been burned doing stupid things like this and always print out the commands and check them before actually doing the commit.

As they say, measure twice, cut once.

Don't feel bad, I think every professional in IT goes through something similar at one time or another.

V__ · 4 years ago
This was my first thought too. Another thing I like to do is to limit the loop to, say, one page or 10 entries, and check after each run that it was correctly executed. It makes it a half-automated task, but saves time in the long run.
hinkley · 4 years ago
Condensed to aphorism form:

    Decide, then act.  
There's a whole menagerie of failure modes that come from trying to make decisions and actions at the same time. This is but one of them.

Another of my favorites is egregious use of caching, because traversing a DAG can result in the same decision being made four or five times, and the 'obvious' solution is to just add caches and/or promises to fix the problem.

As near as I can tell, this dates back to a time when accumulating two copies of data into memory was considered a faux pas, and so we try to stream the data and work with it at the same time. We don't live there anymore, and because we don't live there anymore we are expected to handle bigger problems, like DAGs instead of lists or trees. These incremental solutions only work with streams and sometimes trees. They don't work with graphs.

Critically, if the reason you're creating duplicate work is because you're subconsciously trying to conserve memory by acting while traversing, then adding caches completely sabotages that goal (and a number of others). If you build the plan first, then executing it is effectively dynamic programming. Or as you've pointed out, you can just not execute it at all.

Plus the testing burden is so drastically reduced that I get super-frustrated having to have this conversation with people over and over again.

GordonS · 4 years ago
It's amazing the number of times I look at some simple code and think "nah, this is so simple it doesn't need a test!", add tests anyway (because I know I should)... and immediately find the test fails because of an issue that would have been difficult to diagnose in production.

Automated tests are awesome :)

Too · 4 years ago
A few assertions would have also stopped this.

    # During buildup of the our_ids list:
    assert vimeoId not in our_ids

    # After creating the list:
    assert len(our_ids) > 10000
    assert len(set(our_ids)) == len(our_ids)  # no duplicates

    # Before each final deletion:
    assert id not in hardcoded_list_of_golden_samples
    # Depending on the speed required, you could hit the API again here as an extra check.
But as always everything is obvious in hindsight. Even with the checks above, Plan+Apply is the safest approach.

ineedasername · 4 years ago
>literally simple print statements

Yes, that can be a simple but powerful live on screen log. I developed a library to use an API from a SaaS vendor, in much the same way as the author. It was my first such project & I learned the hard way (wasted time, luckily no data loss or corruption) that print() was an excellent way to keep tabs on progress. On more than one occasion it saved me when the results started scrolling by and I did an oh sh*t! as I rushed to kill the job.

aqme28 · 4 years ago
Rather than commenting it out, I suggest adding a --live-run flag to scripts and checking the output of --live-run=false (or omitted) before you run it "live."
sdevonoes · 4 years ago
But then you have double the chances of introducing a bug for the specific scenario we are talking about:

Before: there is a chance there is a bug in my "delete" use case

Now: what we had before, plus the chance that there is a bug in my "--live-run" flag

ivanhoe · 4 years ago
Beside doing this, I like to first just move files to another dir (keeping the relative path) instead of deleting them. It's basically like a DIY recycle bin.

If both paths are on the same disk moving files is a fast operation - and if you discover a screw up, you can easily undo it. On the other hand if everything still looks fine after a few days, you just `rm -rf` that folder and purge the files.
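In Python, the move-keeping-relative-path step might look like this (a sketch):

    import os, shutil

    def soft_delete(path, root, trash=".trash"):
        rel = os.path.relpath(path, root)                 # keep the relative path
        dst = os.path.join(root, trash, rel)
        os.makedirs(os.path.dirname(dst), exist_ok=True)
        shutil.move(path, dst)                            # same-disk move is just a rename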

inglor_cz · 4 years ago
Yeah, that is what I recommend too.

Instead of performing the dangerous action outright, just log a message to screen (or elsewhere) and watch what is happening.

Alternatively, or subsequently, chroot and try that stuff on some dummy data to see if it actually works.

sam0x17 · 4 years ago
Indeed. I would say that framework- or even language-level support for putting things in "dry-run" mode is something sorely missing from many modern frameworks and languages, something old C libraries used to do.
jagged-chisel · 4 years ago
This is how I do it in compiled code. In shell, I print the destructive command for dry runs - no conditions around whether to print or not, I go back to remove echo and printf to actually run the commands.
hayd · 4 years ago
I'd make sure those include WARN or ERROR (I'd use logging to do that), that way you can grep for those. Spot checking might be difficult if the logs get long.
krono · 4 years ago
The No. 2 philosophy!

Make sure you got everything out and off before you pull up your pants, or else you better be prepared to deal with all the shit that might follow!

password4321 · 4 years ago

   SELECT COUNT(1) FROM table 
   -- UPDATE table SET col='val'
   WHERE 1=1

worble · 4 years ago

    BEGIN TRANSACTION 
    UPDATE table SET col='val' WHERE 1=1
    ROLLBACK

abrookewood · 4 years ago
100% on the logging and dry run.
thunderbong · 4 years ago
That is called experience.

Good decisions come from experience. Experience comes from making bad decisions.

dncornholio · 4 years ago
Dry run really is key here. Most automated tests wouldn't find this bug.
OrwellianTimes · 4 years ago
Experience is the best teacher™
qwertox · 4 years ago
Aaaahhh, the feeling you get when you notice that you fucked up. Everything gets quiet, body motion stops, cheeks get hot, heart starts to beat and sinks really low, "fuck, fuck, fuck, fuck, fuck, fuck, fuck, fuck, fuck, fucking shit". Pause. Wait. Think. "Backups, what do I have, how hard will it be to recover? What is lost?". Later you get up and walk in circles, fingers rolling the beard, building the plan in the head. Coffee gets made.
deltarholamda · 4 years ago
Pffft, it's not a real panic until you weigh the pros and cons of leaving the country with nothing but the clothes on your back and becoming an illegal immigrant shepherd in a nation with too many consonants in its name.

(Your description is so, so, spot on.)

CapmCrackaWaka · 4 years ago
The worst panic I've felt actually took me over the precipice into peaceful oblivion. I started simply saying to myself "oh well... It's just a job".
beardedetim · 4 years ago
Ah, the goat farmer fantasy that always seems to come _at the cusp_ of the solution.
gwerbret · 4 years ago
I had this experience when, years ago on my first day as group lead at $JOB, I was being shown a RAID 5 production server that held years of valuable, irreplaceable data (because there were no backups. Let me repeat that there were no backups). For some bizarre reason, I thought "oh cool, hot-swappable drives" and pulled one out of the rack. This naturally resulted in loud, persistent beeping from the machine, which everyone ignored on the assumption that the fellow who was just hired as the group lead knew what the f he was doing.

While I didn't know what I was doing, I did manage to get the beeping to stop, and had to come in at 5 a.m. the next day to restripe the drive I'd yanked out.

Did I mention there were no backups? When I was a little more seasoned on the job, I raised a polite but persistent issue with management about the need for durable backups. Although I kept at it for months, they thought about it, talked about it, and ultimately did nothing. A few months after I left, the entire array failed. Since the group's work relied on the irreplaceable data, all work ground to a halt for the several months it took for an off-site company to recover the data.

ycmjs · 4 years ago
My previous boss stores company data this same way. I begged him to approve the $5 per month cost for Backblaze on the computers I used. He approved it for some, but not all (about half of the ten computers). He completely rejected the idea for the company's data. After all, it was already protected by RAID.
ricardobeat · 4 years ago
Isn’t RAID 5 supposed to survive a single disk being taken out?
wonderwonder · 4 years ago
lol, it's amazing how fast the blood leaves your face when your mind transitions from "cool, that worked well" to "Oh no, what have I done?"

That backups comment sounds very familiar.

I accidentally deleted a client's products table from the production database in my early years as a solo dev. There was only a production database. Luckily, I had written a feature to export the products to an Excel sheet a while before and happened to have an Excel copy from the prior day. I managed to build an import to ingest the Excel and repopulate the table in record speed while waiting for my phone to ring and the client to be furious. Luckily they never found out.

Deleted Comment

AlwaysRock · 4 years ago
God the feeling of having your body temp rise based purely on realizing you fucked up is so relatable.
cntrl · 4 years ago
damn, your description is spot on and reading this triggered PTSD in me... Last time I had this feeling was two years ago when I destroyed one of our development servers because of a failed application update. I know exactly how I wished Ctrl + Z to exist in real life... We had backups of the machine, but it was still kind of a humiliating feeling to tell everybody and ask for restore from backup (everybody was cool though in the end)
sergiotapia · 4 years ago
I lost an hour and 30 minutes of data from a Slack-like app (chat messages). Luckily, at the time we were pretty small, so not much data was lost, but holy shit did that make me almost throw up.

Thank God my automatic backups were so close to the mistake I made and I didn't lose 24 hours.

Haven't made a mistake like that since and I don't destroy DB records like that anymore.

Yhippa · 4 years ago
Don't forget that out-of-body experience where you just kinda float outside yourself.
mannykannot · 4 years ago
If it is for real, body motion does not exactly stop, it manifests itself in other ways.

Deleted Comment

Oarch · 4 years ago
Poetic! Love it
iamben · 4 years ago
I like these stories. I think they resonate well for 'the rest of us'. I've made plenty of mistakes like this - you learn and grow, right?

One of the best things about HN is that so many incredible, talented people post. It's incredibly inspiring to raise your own game, to see what the best are doing. But sometimes it's equally important to realise we all fuck up, and for every unicorn dev there's another thousand of us grinding away.

OP - well done for sorting the problem and telling us all about it!

rossdavidh · 4 years ago
Amen
muglug · 4 years ago
The root of this particular issue was Vimeo's failure to do this migration for their customers.

Vimeo OTT has a codebase written in Rails, whereas the main Vimeo application is written in PHP. At the time Vimeo acquired Vimeo OTT, its codebase was small — around 10,000 lines of Ruby. Rewriting that codebase inside the Vimeo PHP application would have been a tough technical challenge for the all-Ruby team, and they'd have likely lost some people along the way and missed out on some content deals, so they decided instead to maintain two separate codebases and two separate login systems.

The video-playback and video-storage infra has since been unified, but all the business logic is still siloed.

conductr · 4 years ago
He wasn’t asking them to refactor their internal code bases. But they should be able to whip up the 20 lines of code needed to do this between APIs (or just directly on their servers). Essentially what the author was trying to do when he screwed up. For the author this was disposable code; for Vimeo it would have been a reusable utility.

I know how these things happen. Support ticket queues and all. And while I don’t fully know the difference in cost, I would assume a customer upgrading to an Enterprise plan would get a better support experience.

Whoever within the author's company negotiated the upgrade to Enterprise (or didn't), and failed to embed some agreement around OTT-to-Enterprise transition assistance, was the one who made the first mistake.

chernevik · 4 years ago
Per the post, Vimeo DID do it -- without telling the customer! And then wouldn't help uncluster the situation.
macspoofing · 4 years ago
>The root of this particular issue was Vimeo's failure to do this migration for their customers.

Yes and No. At the end of the day, you as a business have to insulate yourself from your infrastructure provider.

notyourday · 4 years ago
Vimeo is the only infrastructure provider providing that service. It is impossible to insulate a business from it.
tomkwong · 4 years ago
First, I want to say that this is a great post. You always grow stronger when you make mistakes, and writing it up solidifies understanding in the learning process.

This story resonates with many people here because many experienced engineers have done something similar before. For me, destructive batch operations like this would be two distinct steps:

1. Identify files that need to be deleted; 2. Loop through the list and delete them one by one.

These steps are decoupled so that the list can be validated. Each step can be tested independently. And the scripts are idempotent and can be reused.

Production operations are always risky. A good practice is to always prepare an execution plan with detailed steps, a validation plan, and a rollback plan. And, review the plan with peers before the operation.

notyourday · 4 years ago
> 1. Identify files that need to be deleted; 2. Loop through the list and delete them one by one.

> These steps are decoupled so that the list can be validated. Each step can be tested independently. And the scripts are idempotent and can be reused.

This is the most underrated comment.

I'm saying it as someone who had the ultimate oversight of deleting hundreds of TBs per day, spread over billions of files on different clouds and local storage.

spiffytech · 4 years ago
I've never regretted treating tasks like this as a pipeline of discrete steps with explicit outputs and inputs. Sending output to a file, viewing it, then having something process the file is such a great safety net.
RankingMember · 4 years ago
I'm impressed you went with an automated solution (Playwright) for 500 videos after all that, considering they could be cross-loaded from Google Drive almost instantaneously. I'm glad it worked, but coding around a screw-up under the gun seems like a high-risk operation compared to spending 4 hours doing the task manually (albeit being super bored the whole time) with the benefit of knowing it's being done correctly, instead of hurriedly writing a script that might do something else wrong very efficiently and dig the hole deeper.
bruhbruhbruh · 4 years ago
+1 to this. After the few major screw-ups I've caused at work, my self-confidence in my coding ability was rocked, and I tended to react by erring towards manual cleanup rather than coding some scalable solution to fix the issues.
leokennis · 4 years ago
Actually I was surprised reading that the person wrote a script to delete 900 videos.

If you need to do it once, it’s probably 2-3 hours of work? That is identifying a duplicate video and then clicking the button(s) to delete it once every 20 seconds.

Reminds me of https://xkcd.com/1205/

rexreed · 4 years ago
A big part of the reason for the problem in this post is that Vimeo made it impossible to move videos from one Vimeo product to another: "There were roughly 500 videos on VimeoOTT that had to be transferred to Enterprise and Vimeo doesn't provide an easy way of doing it."

I have found working with Vimeo to be very frustrating, especially recently. They have a great video solution, especially for streaming, but they seem to put up these unnecessary and frustrating roadblocks that make me constantly question my decision to use Vimeo. From the inability to move videos from one place to another, requiring complete re-uploads (resulting in problems like this post's), to nonsensical limits and pricing, especially on their new webinar offering, which has a limit of 100 registered attendees. For anyone who has run webinars before, this makes no sense, since 100 registered attendees usually means 20-30% of those people actually attend, so you're capped at 20-30 live attendees. They should price it like most event sites and charge per live attendee rather than per registration.

Regardless, I've been very frustrated with Vimeo since it could be so much better if they didn't have these roadblocks in place. If they could have easily enabled moving videos from one product to another, the post (and 7TB of lost videos) would never have happened. It wasn't always this way with Vimeo, but they went IPO in May 2021 and it's no surprise they're turning the screws on their product offering and pricing now.

NikolaNovak · 4 years ago
Honestly, this is positively representative of any junior developer with comparable experience. Depending on their background and how much production work they had, there's an overwhelming sense of eagerness and enthusiasm. Quick to script and perhaps a bit too quick to execute.

A friendly team will harness that enthusiasm and tame the quickness / encourage respect for production. We all made a massive doo doo at some point, and it's how you proceed that'll define your career.