When I was a kid in the 1980s I really wanted to draw comics for a living. I spent ages sitting at the family dining room table pouring my heart into little comic strips like the ones in my favorite comic books. I'd try so hard to draw as well as the artists I admired, but I could never quite get there. No matter how hard I tried I couldn't match the level of detail, clarity, and expression they achieved.
When I was older I learned that comic books are drawn on much bigger paper than the printed comic; part of the production process is to shrink the drawings down to comic book size. Drawing bigger is a lot easier. I had been trying to draw at the final printed size, skipping the step that makes the job easier.
The same is true for dev work. Constraining yourself to what the end user has throws away a ton of the benefit of having access to better tools. By all means test your work on a machine like your users', but don't put up artificial barriers to doing good work if you don't need to. Otherwise you're trying to go straight to the final version, and that doesn't work.
Running the end result on end-user hardware is well understood in certain domains:
Mobile app developers will test their work on actual phones and tablets.
Good web developers will test across different browsers and use dev tools to adjust the window to common resolutions. This is easy to do and well supported by the software.
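If you want to automate that pass, here is a minimal sketch using Playwright's Python API; the URL and the list of sizes are placeholders, not anyone's recommended set:

    # A quick pass over common viewport sizes with Playwright (Python).
    # Assumes: pip install playwright && playwright install chromium.
    from playwright.sync_api import sync_playwright

    SIZES = [(1366, 768), (1920, 1080), (390, 844)]  # laptop, full HD, phone-ish

    with sync_playwright() as p:
        browser = p.chromium.launch()
        for width, height in SIZES:
            page = browser.new_page(viewport={"width": width, "height": height})
            page.goto("https://example.com")  # placeholder URL
            page.screenshot(path=f"shot-{width}x{height}.png", full_page=True)
            page.close()
        browser.close()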
The bigger problem, IMO, is the trend of having UI/UX designers completely separated from developers. They end up making prototypes that look good in presentations and other abstract formats, which are then handed off to developers to make into something usable. Good designers will use the tools to visualize designs in target resolutions, but often they end up optimizing for what will look best in their portfolio or on a presentation instead.
Similar experience: when I was young I built a website for my brother, and it looked great on my iMac when I showed him. But he said everyone else thought it looked weird that the red of the header and graphics didn't match the background.
On my iMac the JPEGs were rendered correctly, matching the RGB hex values in the HTML/CSS, but on Windows the color space of the JPEGs rendered them in a more crimson hue. Had I tested on my brother's laptop it would have been immediately noticeable.
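The usual fix today is to convert images to sRGB (and embed the profile) at export time, so every platform decodes them the same way. A minimal sketch with Pillow; the filenames are hypothetical:

    # Convert an image to sRGB so its colors match the page's CSS everywhere.
    # A sketch, assuming Pillow is installed; "header.jpg" is hypothetical.
    import io
    from PIL import Image, ImageCms

    img = Image.open("header.jpg")
    srgb = ImageCms.createProfile("sRGB")

    embedded = img.info.get("icc_profile")
    if embedded:
        # Convert from the embedded profile into sRGB.
        src = ImageCms.ImageCmsProfile(io.BytesIO(embedded))
        img = ImageCms.profileToProfile(img, src, srgb)

    # Save with an explicit sRGB profile so renderers don't have to guess.
    img.save("header-srgb.jpg",
             icc_profile=ImageCms.ImageCmsProfile(srgb).tobytes())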
I'm with you here; however, the author has a valid point. I've been using a 43" screen (4K x1) as my main and only display for several years; switching to a laptop for a business trip threw me into a world of confusion. I kept instinctively reaching around to find the button or window corner to make the window larger - but it was already as large as it gets! I don't get to do a whole lot of UI/UX work, but I feel that this kind of experience can have a huge impact on how people think about addressing the needs of their users. I felt genuinely constrained, in a way that just playing with CSS breakpoints doesn't replicate.
I think having a smaller screen on a secondary device for testing your UI/UX work is a very good idea - same as testing how your website/app works on 10 year old hardware, or over a 3G connection.
I recently tried to open a Goolag spreadsheet and make an online payment while the ISP limited the connection to 80 kb/s (like dial-up). While I KIND OF understand the futility of opening web stuff like spreadsheets, the second task, sending a whopping 30 bytes of credit card data, failed dramatically. WHY THE HECK!?! Why have even such critical parts of the Internet become web-2.0-slow, over-bloated, AJAX-JSON-stuffed piles of garbage?! Do smoothie-drinking, chilled-openspace web-cod...desig..makers? not realize that their services JUST MIGHT happen to get accessed by people in potentially disastrous situations? Like the world has never seen earthquakes, floods and wars? Or just rain. Yeah, rain that blocks your 4G and wets your stupid touchscreens.
On the other hand, you have to get a feel for what the web is like for your users.
Maybe your website runs well on a dev machine with a fibre internet connection and an ad blocker on DEFCON 1, but what about your users on budget laptops with crappy screens? Or what about people with crappy wi-fi?
I use a 4K Windows machine for development and recently started using a 13" MacBook Air for browsing and for testing my own websites. It works great, not only for testing different screen sizes but also for getting some insight into what people using a different OS/browser/device feel when navigating my apps. I tried online device testing/simulation, but it's not the same as physically using the device yourself on a daily basis.
Doesn't make any sense; this would be like an audio engineer getting rid of their studio monitors and mixing on AirPods because that's what people listen on.
What they do instead is use their multi-thousand dollar speaker and room setup for mixing (because it is better!) but then check the end result on airpods, car stereos, etc.
Just test on the resolutions you care about, no need to cripple your development setup full time.
This is the correct take. As for whether Netflix and Christopher Nolan do this: the answer is they used to.
I read a forum post where an engineer working on a TV show would play the show in his living room, bedroom, and guest bedroom, walking around the house making sure the sound was balanced everywhere, driving his wife insane.
It's not engineers making that decision, it's Nolan overreaching: "If you see it particularly in an IMAX theater, projected, it's pretty remarkable". source -> https://www.indiewire.com/2020/11/christopher-nolan-director...
Not sure what his deal is. 'Nolan also admitted in a 2017 interview with IndieWire that his team decided “a couple of films ago that we weren’t going to mix films for substandard theaters,” adding, “We’re mixing for well-aligned, great theaters.”' source -> https://www.indiewire.com/2020/09/tenet-sound-mixing-backlas...
It's madness. Idk what Netflix's excuse is. Their spec sheet is 2600+ words, but I think the issue is this line: "5.1 audio is required and 2.0 is optional." My best guess is that if you have 5 speakers and a subwoofer it's fine, but if you're on cheap headphones or a laptop, good luck.
https://youtu.be/VYJtb2YXae8
Realistically, they'd need two mixes. One for people who care about audio quality and have invested in a proper home theatre and another stereo mix for the large majority of movie enjoyers.
Any proper mix engineer will do exactly this. They will mix on studio monitors but also shitty laptop speakers, bookshelf speakers, mono Bluetooth speakers etc. You have to design for the medium, and most people have shitty displays. Might look great on a retina screen and have no contrast at all on a $50 LG.
Sure shitty laptop speakers are how most people listen to music, but they each listen on different shitty laptop speakers with completely different sound characteristics. This would be like a cobbler wearing the shoes he makes for his customers but only ever trying on the size 9 shoes, because they are the most common, while having a size 11 foot.
>Doesn't make any sense, this would be like an audio engineer getting rid of their studio monitors and mixing on AirPods because that's what people listen on.
Skrillex, an EDM producer who became popular about a decade ago, did approximately this. They were iPod headphones back then, not AirPods. Perhaps designing according to the vast majority of users is effective enough?
He took a music genre that you couldn't really listen to unless you had semi-professional equipment and modified the basses to make them 'hearable' for the masses on cheap equipment, or even loud on their phones.
I can kinda see that this worked for him. Not sure if that translates well to other producers.
And when it sounds like shit using airpods, ship it anyways?
The trouble with that, specifically for audio, is that if you never test on anything else you might miss things like crackles that only happen on speakers capable of reproducing that frequency range.
He specifically talked about an audio engineer. Yes as a composer/producer you can get by without audio monitoring hardware if you have an audio engineer taking care of the mixing/mastering after you. Those are different profiles.
There is a caveat that a studio may have a cheap set of speakers around to ensure everything sounds acceptable on consumer equipment. But yes that is in addition to expensive monitors, and maybe a subwoofer the size of a minifridge.
I mean you think it's dumb but I honestly think that mixing with airpods would be the better option.
Same with icons, where a high-resolution pixel graphic can look horrible when scaled down to icon size, versus an icon drawn at the resolution it will be shown at from the get-go.
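You can see this for yourself in a couple of lines of Pillow; even the best resampling filter can't preserve detail that has nowhere to go at 16x16 (the filenames here are hypothetical):

    # Sketch: downscale a large graphic to icon size (Pillow).
    from PIL import Image

    src = Image.open("logo-1024.png")           # hypothetical large source
    icon = src.resize((16, 16), Image.LANCZOS)  # high-quality downscale filter
    icon.save("icon-16.png")                    # fine detail smears or aliases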
It reflects the problem with software development today as well.
Most of the time developers are using top-end machines and massive displays with fast internet, and they build bloated things that work nicely and feel 'fast enough' for them, but that are unusable once they run on low/mid-level laptops and phones.
In the 90s programmers didn't have that luxury, all hardware was the same, so what they were programming on was what the end user would be on, and since hardware was limited anyway, every bit and cycle counted. That's why you get games like Doom/Quake that will run on 75 MHz and 8 MB of RAM and be buttery smooth, whereas today something like Flappy Bird needs 512 MB minimum!
This whole comment section is making me think some people aren't being very diligent with their testing.
A professional mix absolutely requires testing across multiple grades of devices. Whether or not someone solely produces on AirPods is really personal preference, so long as they verify their mix on many types of gear before shipping. A good development shop should be testing across screen sizes and performance profiles for the same reasons.
>In the 90s programmers didn't have that luxury, all hardware was the same
I mostly agree with you. However, the 90s was actually a period where computer hardware became rapidly obsolescent. The 486 to Pentium transition, for example, meant that lots of people had PCs far less powerful than the PCs used by games developers. Quake is a case in point. Quake came out in 1996 and would not have run on a typical PC bought only two or three years earlier. A few years after that, many games would only run acceptably on PCs with 3D graphics cards (which were by no means universal).
It's funny. I have programmed for a decade and at no point have I ever found that an external monitor was the solution to my problems. My coworkers think I'm crazy, but I do all development on a 15' macbook. My path to efficiency and ergonomic work has always been to become more familiar with the keyboard shortcuts for navigating windows and tabs. Along with Spectacle, Vimium, and tree-style tabs, I have never wanted for a second monitor. Mostly I find it annoying because I forget where I put my applications if I have two monitors.
My overall mantra at work is that I can only do one thing at a time if I want to do it well. I feel that, in a sense, limiting myself to one viewport helps reinforce this behavior.
I've used 13" laptops (MacBook Air, Chromebook) for years. I even use my 11.9" iPad when traveling. I mostly use a terminal app ssh'd to a remote server, screen and vim, and switch to a browser window. I prefer to keep my focus on one task in one full-screen window. I find that works best to keep my attention on one thing. Multiple windows, to say nothing of multiple monitors, just create distractions. For me programming and system admin mostly happen in my head, not in a multitude of windows and panes on the screen.
Back in 2005-2008 I worked at a software company that gave developers two large monitors by default, and some people had three -- a status thing for the more senior people. I noticed many of the second displays had Facebook or Twitter on them all day. I don't use social media. I only used one monitor at that job and when I started freelancing I used a laptop and that's worked fine ever since.
I'm old and have worked with screens and keyboards for 40 years, starting with dumb terminals, and never experienced any posture or RSI problems. I know some people do, and they tell me how unhealthy my setup is (no laptop stand or external keyboard). "Ergonomics" is probably more of a personal thing than a hard science.
No, you're just the equivalent of someone who has smoked a pack a day their entire life but doesn't have cancer insisting that that lung cancer business is overblown and unscientific. Even a high risk isn't a guarantee.
External monitor does not necessarily mean multi monitor. I have an external monitor that is big, and has good contrast and refresh rate, but I close the laptop's lid.
Multiple monitors are ergonomically worse than multiple virtual workspaces IMHO - if you can set up keyboard shortcuts instead of mouse gestures.
I am 100% on board with the single screen, single fullscreen app, almost exclusively keyboard work with lots of alt/cmd-tabbing.
I really like having a second screen though. I keep my laptop open on the side and use it to display "read only" stuff like a build status, consoles, etc.
For a while I used a single HD 32-inch monitor and loved it. I avoided complex window arrangements all over the place, but could lean back in my chair and see everything from a distance.
This is a bit of an odd take. Do you think that because you can manage one screen well that you don't need a second screen? Or do you think other people only need a second screen because they can't manage a single one?
More screen real-estate is just that. Effective management of two monitors might be a little extra work but it's worth it.
> My path to efficiency and ergonomic work has always been to become more familiar with the keyboard shortcuts for navigating windows and tabs.
> Mostly I find it annoying because I forget where I put my applications if I have two monitors.
Do you think you could have put this effort into coming up with a workflow that took advantage of multiple monitors? I didn't use multiple monitors for years, but eventually I tried it out and now it would be hard for me to go back, so I've always considered it to have a small learning curve to figure out how to "make use" of the extra space properly. I feel like being willing to use more keyboard shortcuts for windows and tabs would be an excellent way to get more efficient use out of extra screen space.
That's so interesting. I'm the opposite, but I've worked with plenty of people who work like you do. Both styles are pretty common.
> My overall mantra at work is that I can only do one thing at a time if I want to do it well
I'm curious. What about tasks that are "one thing" but may involve >1 windows/apps? For example, referring to documentation while looking at code? Or tweaking CSS/markup while observing the resulting changes in the browser? (If you do that sort of work)
Yeah, I've had 3 monitors forever, and for a while I tried 4 [1].
I do webinars where obviously one monitor is displayed. I find it clumsy - switching between the code, the program, log, browser, docs and so on.
In general work I focus on one task, but I find that many programs are involved at one time. Docs on one, code on another, program on a third and so on.
I also have a need for email to be open, along with Skype etc. Those get hidden often though, hence my need for the 4th.
[1] my experiment with 4 failed because the horizontal spread was too far, and it was tiresome to swivel to the 4th. I considered putting the 4th above the 3, but felt that too might be "out of eyeline". So for now I'm maxed at 3.
Don’t you find looking down at a laptop painful after a while? At a desk I’m okay working on a smaller screen but if I don’t have an external monitor I need to raise it on a few books and use a separate keyboard if I’m going to spend 8 hours a day sitting like that.
I would consider it crazy to not have an external monitor. I always use one.
But that's not because I think it's important to have two monitors. I only use the external monitor; the laptop's built-in screen would only see any use while traveling.
I'm pretty sure that those people use professional laptop stands, which elevate the laptop monitor to ergonomic height, in conjunction with an external keyboard.
But 'ergonomic work' has a lot to do with posture as dictated by screen positioning, not just size. With OP's setup, the 180-degree ThinkPad hinge and external keyboard allow him to sit with relatively good form. I used to have a hotdesking setup that was similar [0].
[0]: http://i.imgur.com/3R51QXE.jpg
How do you do that with a MacBook, where the screen only bends so far? Do you also use an external keyboard?
Me too, I just use my macbook. I keep 2 buffers side by side in VSCode, and cmd-tab or cmd-` to switch to other apps/windows.
I genuinely suffer from imposter syndrome when I see my colleagues with several huge screens in all sorts of configurations. I guess I'm not a real hacker.
I can only look at one screen at a time anyway, so it's as fast to switch apps as to turn my head to look at a different screen. Plus, if I want to move the cursor, I need to switch apps anyway (or do a long trip with the mouse and click somewhere for focus, which is even worse).
If anything, not spending the time to move your neck but rather manipulate keys to manifest buffers in front of your eyes is the real hallmark of being a hacker.
There's a great macOS app (AltTab) that replicates Windows' alt+tab window switching. It's fantastic; I highly recommend it if you are as big a keyboard window switcher as you sound.
I started using Apptivate a couple of months ago, someone on here recommended it. It makes app switching very pleasingly fast. I use a PC kbd with a mac mini :-) and have F5-F9 assigned to instantly switch to the 5 apps I always have open.
Also, holding down F5-F9 shows those windows as long as they're held down. So now, to take a peek at the bash window I hold down F5 as long as I need to look. Can also type or drag things onto there while holding F5 with the other hand.
You can assign any app to any key or combination of keys. It's free. It can do other stuff too. Very highly recommended.
Also, F1-F3 I have assigned in the system settings to F1=Show desktop, F2=current app windows, F3=show all open windows (mission control). So I very rarely have to use Cmd-Tab any more!
http://www.apptivateapp.com/
I get that you work on one thing at a time and use programs with keyboard shortcuts like Spectacle and Vimium. But still, I can't help but wonder: how does your IDE look? I am guessing you have a code editor (generally with left/right/bottom panes for the file explorer, REPL, terminal, and other tools); add a browser with a documentation page open, and your real estate is too little. That means a lot of switching/scrolling; very easy thanks to all the keyboard shortcuts, but switching/scrolling nonetheless. Maybe you detach all these into their own windows and use different workspaces in your tiling manager, but that's again a lot of switching just to use a REPL.
I am happy that your setup works for you, but a bigger monitor and/or maybe even one extra monitor will just help reduce a lot of that switching/scrolling.
My IDE has basically just the code editor yes. I don't need the project Explorer. Zero useful information at most times so why open it? It's not often I need to know where something is. I open files by name via shortcut. Yes our naming is that good.
I don't need a browser with documentation open at all times. I only need to look at it IFF I need to look something up and if so I switch to it and it's full screen. Copy what I need. Done. No need to flip my head over to the place on some huge screen where "documentation lives". Also I almost never need docs as I have auto complete in the IDE.
When I debug half the screen has the debugger open but only when I debug.
I also know how to get to the place I need with shortcuts at all times. When I see people use that weird "show all windows in miniature at the same time" feature, I die a little inside. Of course it's going to take them ages to find the right one. I alt/cmd double-tab to the right window faster than the animation to show the windows would finish. I know that my documentation is two alt-tabs away and that with one alt/cmd-tab I'm back in my IDE. I know the terminal is one alt/cmd-tab away. I don't understand people who use small terminal windows integrated into an IDE. Small and unusable. I have a terminal window alt/cmd-tab-able at all times and I know which tab inside that window is for what kind of work, e.g. logs tailed on tab 2. When I have it open I can 100% focus on it. No distractions.
As a human you can only focus your eyes on one single thing at a time.
Whether you have to move your head/eyes or press a key to switch between the documentation and the IDE does not make much difference in terms of efficiency.
I actually find it more comfortable, ergonomic and fast to have a single screen and switch the workspace. That also means that my (physical) setup is minimal and easy to manage, and that I do not get unnecessary distractions or things moving (notifications, messages, ads, animations...) in my peripheral vision.
It's on the macbook flair, which is a foldable cell phone with a keyboard. Developers have learned about the power and low cost of this machine and have started to use it as a laptop.
The multiple folds of the screen and keyboard are what allow the flair's 15 foot footprint to be carried around in a phone's form factor.
I have one of the original 16 core versions of these. Come to find out, Apple has recently released 80 and 128 core versions of the flair, so I may need to upgrade next year for the additional horsepower to run data models.
I use shortcuts extensively too. And sometimes work exclusively on my laptop.
That said, it's helpful to be able to use my project while seeing all the logs involved. And that gets extremely claustrophobic on a laptop. Especially if you have an application that is heavy on both the client and server side - chrome, chrome devtools, re-frame-10x sidebar debugger, server side logs/debugger.
It gets even worse when I'm working on one project, which is a client/server game using Unity.
> Mostly I find it annoying because I forget where I put my applications if I have two monitors.
Your applications should always be in the same place. You can still use keyboard shortcuts too.
Beyond that, because when I first got a 30" monitor it was so much real estate and it took so long to move my mouse between windows, I wrote a little program that let me set and restore mouse positions with global hotkeys. It will also bring whichever window is under the mouse position to the front.
So I can see all logs at once as I'm interacting with the project. Including the debugger. Everything always goes in the same spot. And if I want to interact with one of those windows, I can use a hotkey which will instantly set my mouse cursor to a known good position within that window, which enables me to interact with even a GUI in a reproducible fashion rather than having to slow boat my cursor over to it.
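Something in that spirit fits in a few dozen lines. A rough sketch of the save/restore part using pynput (this is a guess at an approach, not the parent's actual program; raising the window under the cursor is OS-specific and omitted):

    # Sketch: save/restore mouse positions with global hotkeys (pynput).
    from pynput import keyboard, mouse

    cursor = mouse.Controller()
    saved = {}

    def save(slot):
        saved[slot] = cursor.position      # remember current (x, y)

    def restore(slot):
        if slot in saved:
            cursor.position = saved[slot]  # jump back instantly

    hotkeys = keyboard.GlobalHotKeys({
        '<ctrl>+<alt>+1': lambda: save(1),
        '<ctrl>+1': lambda: restore(1),
        '<ctrl>+<alt>+2': lambda: save(2),
        '<ctrl>+2': lambda: restore(2),
    })
    hotkeys.start()
    hotkeys.join()  # keep the script alive, listening for hotkeys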
> Beyond that, because when I first got a 30" monitor it was so much real estate and it took so long to move my mouse between windows, I wrote a little program that let me set and restore mouse positions with global hotkeys. It will also bring whichever window is under the mouse position to the front.
On Windows, the focus-follows-mouse feature does both of these things. Along with raising and focusing whatever you mouse over, it also moves the mouse to any window you alt-tab to. Sadly, it's been tuned weirdly in newer versions of Windows, so it isn't very useful now.
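For anyone who wants to poke at it, the switches behind that feature are exposed through the Win32 SystemParametersInfo call; a hedged ctypes sketch, Windows only:

    # Sketch: enable focus-follows-mouse on Windows via SystemParametersInfo.
    import ctypes

    SPI_SETACTIVEWINDOWTRACKING = 0x1001  # focus the window under the cursor
    SPI_SETACTIVEWNDTRKZORDER = 0x100D    # also raise it, not just focus it
    SPIF_UPDATEINIFILE = 0x01
    SPIF_SENDCHANGE = 0x02

    user32 = ctypes.windll.user32
    flags = SPIF_UPDATEINIFILE | SPIF_SENDCHANGE
    user32.SystemParametersInfoW(SPI_SETACTIVEWINDOWTRACKING, 0,
                                 ctypes.c_void_p(1), flags)
    user32.SystemParametersInfoW(SPI_SETACTIVEWNDTRKZORDER, 0,
                                 ctypes.c_void_p(1), flags)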
I don’t mind coding on a 12 or so inch laptop screen. But editing a LaTeX file is kind of a pain, because it is nice to have the code and the document side by side, but half a screen is quite small. I guess people working on websites must have similar issues?
Personally I don't mind seeing only the LaTeX code and looking at the PDF once in a while after compiling. But a second screen to keep background papers or notes open and accessible would be very convenient.
I mostly agree. I enjoy my 40” 4k monitors, but I’ve found that sometimes constraining myself to a laptop display can improve my focus. However, laptop keyboards, even those on macs and thinkpads, are poorly suited for extended use I think. The author is using an external keyboard, and that makes the setup much more usable in my opinion.
I'm using 4 monitors, two 33" and two vertical 22".
It has nothing to do with knowing how to do shortcuts, but having to do them at all. There are plenty of instances where you may only need to provide input to one window at a time, but see other things, like documentation, output, logs, or a browser window of the page you are editing. Having multiple monitors allows me to greatly reduce the need to switch between windows to reference something or see results of my inputs.
I personally find that if I'm forced to alt-tab between my IDE and documentation, like when I'm working on a laptop, it's incredibly distracting, breaks my flow, and slows down my work significantly.
I only use one monitor, with virtual desktops and side-by-side windows for context switching. But it's the external monitor if it's available: it is more ergonomic to work on a big screen, at the optimal distance and height from the eyes, with the neck and spine properly aligned rather than curving down.
I use only 50% of my MacBook screen for actually writing code, and in fact I bet I could use less than that. I could SSH from my phone to a dev server and just work off a Bluetooth keyboard and the Terminus app. The advantages of being a powerful vimmer.
Interestingly enough, with a MacBook I always used only one external monitor instead of the 2 I am used to. IMO the controls were kinda weird (for me), so it felt less painful just sticking to one.
For me, it's not so much the screen size as the ergonomics. Having the screen at the correct height to enable good posture. Separate mouse and keyboard for comfort.
How do you deal with 4-up situations - where you need the comp or design doc, the code, the test window, and the debugger/dev tools, all available to glance back and forth between?
This article is a good reminder that there is no such thing as one-size-fits-all. His setup would be horrible for me. I currently have three 1920x1080 monitors in an L configuration (two in the middle, one to the right) and it works great for me- because I need to see a lot of windows at once. I tell everyone I work with "you need at least two monitors" because for our work, it's true. Clearly a different case with OP.
I don't like having two monitors side by side because I find myself looking not-straight for too many hours of the day at my 'primary' monitor. I too used to use an _| shape with one in the centre and one tall one on the side.
Now I use a 43" 4k monitor and use most of the middle 2/3rds. I would use an extra wide monitor but they're made excessively wide & short. When playing games I can use 2560x1440 @59 Hz and the monitor knows to do 1:1 pixels so it's only using the centre of the display. I'm not so hardcore I need higher refresh rates--though the buttery smooth scrolling on new Macbooks is nice.
Very generally, what is your work? If some part of it is "monitoring", maybe some metrics or live orders of some kind then having it permanently on another monitor is a no brainer.
I generally do all my work on one screen and keep email / teams / Spotify on the laptop screen.
I work in the high end managed WordPress industry as Customer Success Manager. At any moment I have Slack, Tickets, Live Chat, Shell to several servers, and my server monitoring system up. ADHD means "out of sight out of mind" and so it's all visible all the time.
I tried this as well for about 8 months and ended up going back to an external monitor setup. Why? Health reasons. The ergonomics of using a laptop every day are detrimental to your posture. Being hunched over and looking down at a laptop all day is not healthy. You can still have bad posture with an external monitor, but at least those risks are avoidable if you are disciplined enough. With a laptop you really have no choice but to have poor posture.
You can also get it via bad posture at a larger display. It may present as muscle pain, and can require surgery if not corrected.
Get your ergonomics right, or you may have regrets years later, when you're older.
Small note, depending on location and situation. If you're in a work situation, not using an external monitor could cause issues for you employer, even if you're working from home. Some countries, like Denmark, have laws that require employees to be given, and use, external monitors, keyboards and pointing devices. For most people being hunched over a laptop for eight hours a day will result in neck, back, and/or wrist pain. Your employer could be fined if you do not use an external monitor.
A former coworker of mine had to be repeatedly reminded to use the provided equipment. In the end it was agreed that everything would sit on his desk, ready to use, in case of a visit from the government office of workplace safety.
https://mescforyou.ksc.nasa.gov/Content/RehabWorks/Tech%20Ne...
Yes and no. It's on a stand, but it doesn't seem to fulfill the requirements for adjustability. It's not height adjustable and the tilt may be insufficient, at least one way, but excellent the other.
What!? This world is a truly bizarre place.
Multiple monitors are a force multiplier to Alt+Tab.
Yes, I can compile a formal business document from disparate programs and formats all on one screen and there is a bit of an advantage - mouse movement - but that's trivial compared to spatial awareness. Getting something from Excel into PowerPoint while checking Outlook incoming stuff and shooting the cow patties on Teams is a different modus operandi. That's why I relate what I do to being more like an F1 machine than a long-haul key puncher. When I'm on, I'm burning gas and hauling ass.
When I'm off, I'm decompressing in ways that are not exactly customary.
> Hell, some of these users were being shown the tablet-based view of the applications since our breakpoints were so ridiculously large. Yikes.
I used a 1366x768 monitor on my desktop for a couple of years and this was a huge annoyance: almost every single site assumed that I was on a tablet. Firefox allowed me to "zoom out" the sites, which helped somewhat, but then pretty much everything was a combination of tiny letters surrounded by tons of empty space.
Worse, I remember checking various stat sites at the time, and 1366x768 was the most common non-mobile resolution!
It also affects people on much larger displays which just want a narrower browser window. So many sites break in random ways (like scrolling not working) or hide important things like the login form when the window isn't wide enough. Clearly many sites are never tested for that.
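That failure mode is cheap to catch automatically. A minimal sketch with Playwright's Python API, where the URL and selector are made up:

    # Sketch: fail fast when the login form disappears in a narrow window.
    from playwright.sync_api import sync_playwright

    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page(viewport={"width": 700, "height": 900})
        page.goto("https://example.com/login")  # hypothetical URL
        form = page.locator("#login-form")      # hypothetical selector
        assert form.is_visible(), "login form hidden at narrow widths"
        browser.close()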
So many years of talking about responsive web frameworks and half the sites simply don't care.