Not really a relevant thing to talk about here, but I find there is a trend in the Linux ecosystem of people explicitly implementing more and more policies at the code level, instead of building flexible mechanisms and data-fying user decisions. I'm not talking about exposing user-friendly configuration options, but about mechanism-over-policy. The presenter mentioned "use-case over mechanism/policy", but, given a good mechanism, use-cases are just policies.

Although I do understand the value of the current approach in the industry (i.e. MVP), I think it does come with a cost: you get a large corpus of low-density code that produces surprises in many different corners. If you read the code, you'll instantly notice that these projects are becoming more and more labor-intensive, and less and less hacker-friendly. Red Hat-funded projects are really leading this trend (I'm looking at you, systemd).

Some might think Wayland is a smart approach, but it's actually not. It's mostly just pure grunting, a showcase of large engineering horsepower. We are losing smartness for the sake of grunting. Wayland is a large-scale refactoring and optimization of the modern-day X11 desktop ecosystem: everything works in the same way, but things happen in slightly different places, and some features are banned by policy (arbitrarily outlined by the Wayland devs). These missing features now need to be implemented explicitly as "protocols".

Now, I really want to ask: where are we heading here? Are we really stepping forward? I don't think so.

I have no experience with variable refresh, but I have been using up to three monitors with different resolutions, up to 4k, under X11, for almost a decade, without any problems whatsoever (mostly with NVIDIA GPUs, where their Settings utility simplifies the configuration of a multi-monitor layout).

X11 is very far from an ideal graphics system, and I would like to see it replaced by a better system, which would still have to implement the X protocol for legacy applications. Nevertheless, I have not yet seen any argument indicating that Wayland is the appropriate replacement for X11. On the contrary, some of the ideas on which Wayland was originally based were definitely wrong, and they showed that the Wayland developers lacked experience with how many computers are actually used. Even if a part of the initial mistakes has been patched meanwhile, that lack of vision at Wayland's origin makes me skeptical even about the quality of the Wayland parts about which I know nothing.

All my monitors are fixed 60 Hz, so I have never used variable refresh, and there is no scrolling stutter. Vsync works, but I must choose for which of the monitors. For about a decade, I have used only 4k monitors, starting with the early models that were seen by the computer as multiple monitors, because HDMI and DisplayPort could not carry 4k at 60 Hz on a single link at that time.

Nevertheless, I still cannot understand why anyone would want to use any kind of "scaling" in relation to a monitor. Any kind of "scaling" is guaranteed to generate sub-optimal images. The correct way to deal with monitor resolutions is to set, for each monitor, the appropriate dots-per-inch value, depending on the monitor's size and resolution. With the right DPI value, all typefaces and vector drawings will be rendered beautifully. Scaling is never needed per monitor, but only per window: either for windows containing bitmap images (i.e. pictures or movies), or for windows whose GUIs were implemented by incompetent programmers in Java, with typefaces sized in pixels instead of scalable typefaces sized in points, like any decent (non-Java) GUI.
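The "right DPI per monitor" idea mentioned above (derive dots-per-inch from a monitor's physical size and pixel resolution) can be sketched as below. The panel dimensions and the output names `DP-1`/`HDMI-1` are illustrative assumptions, not from the source; note also that the core X protocol exposes a single DPI per X screen, so a truly independent per-monitor DPI can only be approximated under plain X11.

```shell
# Sketch: compute DPI from panel size and resolution, then apply it with xrandr.
# Assumed example hardware: a 27-inch 4K monitor, ~596 mm wide, 3840 px across.
# DPI = horizontal pixels / (physical width in mm / 25.4 mm per inch)
dpi=$(awk 'BEGIN { printf "%.0f\n", 3840 / (596 / 25.4) }')
echo "$dpi"   # prints 164

# Applying it (output names are hypothetical; list yours with plain `xrandr`):
#   xrandr --dpi "$dpi"                                       # set the X screen DPI
#   xrandr --output HDMI-1 --mode 1920x1080 --right-of DP-1   # side-by-side layout
```

Toolkits that honor the reported DPI (or the `Xft.dpi` X resource) will then size point-specified typefaces correctly without any bitmap scaling, which is the rendering behavior the comment argues for.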