Fun Fact: This same sort of thing also happened on the Classic Macintosh Quadra 840AV, when running in 8-bit (256 color) mode. Playback of realtime video capture reserved color index #243 (a very dark green in the system palette), and ANYWHERE that color was used, it would be replaced with the live video. I created some cool effects using this back in the 90s.
Long time back I'd play PS2 games in a chat window in EverQuest while waiting for mobs to spawn. I had a capture card that would overlay over a particular shade of purple that I discovered while trying to screen shot something from a game. I made an empty chat window in EQ that color and where it overlapped the card's application window behind the video would render. Was super jank picture in picture, but it worked.
> These special surfaces were called “overlays” because they appeared to overlay the desktop.
I have some vague memory of programs whose windows had funky shapes (i.e. not rectangular) also using overlays of some kind. Maybe that's a different sort of overlay?
This reminds me of the time when Quake started rendering inside the Start button of the Windows 95 desktop (or maybe Win 98). I wish I could remember the details, but I think it was something to do with alt-tab.
That effect is a known quirk of early DirectDraw/WinQuake on Windows 95/98. When a game grabbed the primary surface in certain modes, parts of the desktop GDI (like the Start button/taskbar) could briefly be overdrawn or show the game's backbuffer, due to how DirectDraw managed the primary surface and cooperative levels in fullscreen and windowed modes on those systems.
A green stripe on the right/bottom is usually due to a different issue: interpolation errors in the chroma planes when decoding YCbCr video. The chroma planes use a biased encoding where 0 (no color) is stored as 128. A sloppy YCbCr-to-RGB conversion without proper clamping can interpolate against 0 at the edges, which decodes as maximum negative chroma in both the blue- and red-difference channels -- and those two together produce green. This can happen either because of an incorrectly padded texture or from failing to handle the special final-sample case for 4:2:2 chroma.
This issue can happen with overlays, but also non-overlay GPU drawing or CPU conversion routines.
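To make that failure mode concrete, here's a minimal sketch of the conversion in C (full-range BT.601-style coefficients assumed; not taken from any particular codebase). Feeding it a chroma sample that got interpolated down to 0 instead of the neutral 128 lands hard on green:

    #include <stdint.h>

    /* Clamp an intermediate value into the displayable 0..255 range. */
    static uint8_t clamp8(int v) { return v < 0 ? 0 : (v > 255 ? 255 : v); }

    /* Full-range BT.601 YCbCr -> RGB. Neutral chroma is stored as 128,
     * so a padding texel left at 0 decodes as Cb = Cr = -128: maximum
     * negative blue- and red-difference, i.e. pure green. */
    static void ycbcr_to_rgb(uint8_t y, uint8_t cb, uint8_t cr,
                             uint8_t *r, uint8_t *g, uint8_t *b)
    {
        int d = cb - 128, e = cr - 128;
        *r = clamp8(y + (int)(1.402    * e));
        *g = clamp8(y - (int)(0.344136 * d) - (int)(0.714136 * e));
        *b = clamp8(y + (int)(1.772    * d));
    }

    /* ycbcr_to_rgb(128, 0, 0, ...) yields roughly (0, 255, 0). */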
Video rendering can still be done with overlays, but it's a little more substantial, involving separate planes with the locations configurable on the graphics card. Look up MPO, Multi-Plane Overlay.
Your green stripe is likely because of the classic combination of unclamped bilinear filtering and a texture that's larger than the output region being used as the drawing surface for the video.
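A hedged 1D illustration of how that goes wrong (valid_w and the zero-filled padding are assumptions for the example): the filter's second tap walks off the valid video region into the padding, and clamping the tap index is the usual fix:

    /* Linearly filter one row of a chroma plane. valid_w is the real
     * video width; the backing texture may be padded wider than that. */
    float sample_chroma_row(const float *row, int valid_w, float u)
    {
        int   i0 = (int)u;
        int   i1 = i0 + 1;          /* second tap: may land in padding */
        float f  = u - (float)i0;

        /* The fix: never blend against padding texels. Without this
         * clamp, zero-filled padding drags edge chroma toward -128. */
        if (i1 > valid_w - 1)
            i1 = valid_w - 1;

        return row[i0] * (1.0f - f) + row[i1] * f;
    }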
They still use "overlays" - just they're a lot more featureful in modern implementations than "Replace all of one colour with another surface" - so they tend not to have the same limitations and quirks.
MS started exposing some capabilities using MPO in the Windows 8 era [0], and they've pretty much always had pretty comprehensive composition pipelines in hardware on mobile platforms, since power/bandwidth limitations mean that compositing the display can be a significant fraction of the device's total performance.
I suspect green (or other block colour) artifacts on video edges come from mismatches between what the hardware video decoder produces and how the app displays it, and the bugs that often fall out of that.
Most video compression requires pretty large blocks, normally from 16x16 up to 64x64 depending on the format, and that may not align with the actual video size (e.g. 1080 doesn't divide evenly by either). But implementations often still need that extra surface, as things like motion vectors may still refer to data in the "invisible" portion, and it has to be filled with something. It's then real easy to fall into bugs where small numeric errors in things like blending, or even just mix-ups between the different "sizes" that surface has, cause issues at the edges.
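For illustration, the size mismatch is just round-up-to-block arithmetic (a sketch, not any particular decoder's code):

    /* Round a display dimension up to the codec's block size. */
    static int align_up(int size, int block)
    {
        return (size + block - 1) / block * block;
    }

    /* align_up(1080, 16) == 1088, align_up(1080, 64) == 1152.
     * Those extra rows exist in memory, may be referenced by motion
     * vectors, and must never leak into what's shown on screen. */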
I suspect the other comment about using ANGLE/dx9/dx11-on-12 may be effective as it /also/ causes the hardware video decoder not to be used, which is often a source of "unexpected" differences or limitations that commonly cause errors like that.
As far as I can tell, it's still the case if the video is DRM-ed: any screenshot of it will be a black square, because the OS can't "see" the video; it's sent directly to the monitor, similar to what's described in the article.
No, it's not that. Usually, the OS does see the video and the compositor still renders it to the screen like normal, but when you take a screenshot, the OS itself is an accomplice here by not rendering that surface in screenshots.
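For what it's worth, Windows has a documented knob for exactly this; a minimal sketch of it below (DRM players usually get the same effect through the protected media path rather than calling it directly):

    #include <windows.h>

    /* Ask the compositor to show this window's content only on the
     * monitor; screen captures of it come back black instead. */
    void hide_from_capture(HWND hwnd)
    {
        /* WDA_EXCLUDEFROMCAPTURE (Win10 2004+) would remove the window
         * from captures entirely rather than blacking it out. */
        SetWindowDisplayAffinity(hwnd, WDA_MONITOR);
    }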
Not true about Android at least — there is secure boot where the bootloader will snitch on you if you unlock it, and you can't do anything about it because the attestation happens in a trusted execution environment, a hypervisor with higher privileges than the OS kernel, that you never get to unlock.
This was always annoying, when you wanted to take an actual screenshot of a high resolution video and use it as a desktop background for example. I assumed it was connected to hardware/software acceleration or something.
Yeah, I used to have a few "live wallpaper" type videos I'd use this way. Around the time AVC-ish algorithms were democratized by DivX. IIRC the player I used had #0000A0 as its overlay color... may have even been the DivX branded player.
...This is the oldest I've ever felt, unsure of my own memories regarding something I have to consult historical records about...
This unlocked some memories. I remember on my system the chroma colour not being green but some very dark shade of grey that was almost black but not really black… something like #010101
In the earlier days of graphics on Linux this was a common rite of passage. As far back as 2003 I can remember running the i915 graphics driver (Intel has always had pretty good open drivers). If you opened a video in VLC and took a screenshot of the desktop, you'd be left with a hole where the screenshot grabbed the overlay compositing color (blue, green, etc.), while on the actual display X11 filled that area with whatever video was playing.
This brings back memories of my old HP laptop with an Athlon 64 and a Radeon X200M.
The crappy FGLRX driver only supported overlays (afair), so when running something like Compiz it would transform the window, green background and all, but the video itself stayed in place; parts of the video just got stuck on top wherever the green happened to overlap.
I still remember being excited when the open source drivers finally gained support for r300 and could do proper textured video...
If you watch Twitch, you can see that all instances of the same emote in chat animate together. Then I tested this more generally in a web page, and the same thing happens - if the same gif is placed multiple times in a page, all instances of that gif will play in sync even if loaded at different times. I guess there's a similar idea in browsers then, where maybe there's only one memory representation of the gif across the page or the browser.
This was a nice trick to protect text from copying, for instance student assignments. Students could still point a digital camera at the CRT display, but 20 years ago cameras were costly and students didn't have them, and retyping the text from scratch was a tedious job. So assignments served online were not shared too quickly.
Around 2005 digital cameras were commonplace. Mobile phones also had cameras by then, even if not very good ones by today's standards. Maybe you're talking about an earlier time?
It's not that cameras didn't exist, more that the technology wasn't sophisticated enough to make cheating easy compared to now. A personal OCR Python library wasn't a thing back then.
Not saying that cheating was impossible, just that it wasn't easy, unlike now, where there's a library for everything.
Nobody argued that it was impossible before. Nor did I claim that there were no cameras before 2005.
Cameras just weren't as ubiquitous as today. Unironically arguing that point is silly. They just weren't (I know you didn't, but we're in a comment chain that made that argument).
Yes, in most groups of people, there were a few of them that had cameras readily available, but it wasn't the norm for everyone.
What was available (not just cameras, but OCR etc.) was a lot less accessible than it is today, where you just point your phone at something and it transparently extracts the full text of whatever is on the screen or in front of the lens. Consequently the issue got a lot more problematic and widespread, which was the only point being put forward here.
This might have been the case where you were, but it wasn't everywhere.
I was about 15 at the time, I'd had multiple digital cameras and had a phone with a crappy camera on it. All of my friends had digital cameras. Myspace had already peaked, Facebook was taking off, and that was largely driven by kids taking pictures.
The idea that the ability to take a photo wasn't ubiquitous for big parts of the western world in 2005 doesn't seem accurate.
I hate how incompetent tech writing and marketing rewrote and simplified mobile phone history into pre/post-iPhone. Yes, we did have touchscreens, smartphones, and camera-enabled devices many years before the iPhone. Arguably, on several metrics, many Symbian/Linux/BlackBerry phones of that era were better smartphones than today's iPhones/Androids, as defined by hardware capabilities that got removed over time while arbitrary constraints got added on the software front.
I had an Ericsson R320s at the time, and that was the web page that convinced me to replace it with a Nokia E70. The joystick stopped working after many years and there was no fix for it. I switched to the E90 and I hated the keyboard: it took a lot of force to press the keys and there was no clear key separation for touch-typing. But that was the phone that I dropped on the pavement while riding a bike at 80 km/h, and it survived with just scratches! The best 3 phones I ever had.
Now I have an N86 which I'm going to keep until the last 2G network goes offline.
Don't know about other markets, but the first iPhone didn't sell well, especially because it was behind the high-end feature phones of the time at a higher price.
Good to see the first comment there corrects him: it's not actually green pixels. At least with the Intel and nVidia drivers I've used, it appears to be more of a dark magenta. It could be configurable or hardcoded somewhere in the driver, but I don't think it's fixed in hardware.
The desktop compositor takes the graphics content of all the windows, including their composition visuals, and combines them into a full desktop image that is sent to the monitor.
The irony is that in 2025, this answer is now wrong again. Starting with smartphones, scanout hardware supports multiple planes/overlays again that are composited on the fly by fixed function blocks. This bypasses having to power on the GPU and wasting memory bandwidth (a large amount of power use in a smartphone).
No longer involves hacks with green pixels though.
Alpha blending isn't necessary for video overlays, and it's wasteful. Well, it's necessary inside the overlay if that is where the controls should appear.
Alpha blending is two reads, one write per pixel, for the whole affected region (whatever that is, could be the whole screen). An opaque overlay is one read, one write, only for every pixel in the desired rectangle.
The video overlays in question are not drawn by blending into a framebuffer in memory. They're two separate display planes that are read in parallel by the display path, scaled, and blended together at scan-out time. There are only reads, no writes. Modern GPUs support alpha-blended display planes using the alpha channel that is otherwise often required to exist anyway as padding.
As OP noted, using hardware display planes can have efficiency advantages for cases like floating controls over a video or smoothly animating a plane over a static background, since it avoids an extra read+write for the composited image. However, it also has some quirks -- like hardware bandwidth limits on how small a display plane can be scaled.
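To put rough numbers on that efficiency argument (illustrative figures only: a 1920x1080 region, 4 bytes/pixel, 60 Hz):

    #include <stdio.h>

    int main(void)
    {
        const double bytes = 1920.0 * 1080.0 * 4.0;  /* one frame */
        const double hz    = 60.0;                   /* refresh   */

        /* Compositor alpha blend: read dst + read src + write dst. */
        printf("blend: %.2f GB/s\n", bytes * 3.0 * hz / 1e9);
        /* Opaque copy into the framebuffer: read src + write dst.  */
        printf("copy:  %.2f GB/s\n", bytes * 2.0 * hz / 1e9);
        /* Hardware plane: scanout reads the surface directly.      */
        printf("plane: %.2f GB/s\n", bytes * 1.0 * hz / 1e9);
        return 0;
    }

That prints roughly 1.49, 1.00, and 0.50 GB/s respectively, which is why skipping the composited copy matters on bandwidth-constrained hardware.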
I had a Matrox Millennium card with a breakout box for capturing RCA, S-Video, and cable TV; I'd watch TV on my Windows 98 SE computer, which was the craziest thing back then. But I always felt like the green-screen-like effect was some kind of mysterious bug that I'd better not mess with, or video capture would break. Windows 98 was barely working on a good day, so it felt like the computer was in the process of failing in a graceful and useful way, and I'd better not push my luck.
Every so often you could get a glimpse of the man behind the curtain, by dragging the window quickly or the drivers stuttering, which would momentarily reveal the green color (or whatever color it was) before the video card resumed doing its thing. Switching between full screen and windowed mode probably also revealed the magic, or starting a game that attempted to grab the video hardware context. And of course sometimes other graphical content would have the exact right shade of color, and have video-displaying pixels.
Xine and TVTime worked the same way with the overlay video output: if you tried to take a screenshot with X11, you would get a blue/greenish window instead of the video.
You had to use the built-in screenshot function of your video player/TV viewer.
The current MPlayer under OpenBSD 7.7 still has the overlay video output.