Mozilla Developing A New Rendering Engine

By | December 13, 2012


Aimed at multi-core machines.

Year after year, we keep hearing about new ways and techniques to enhance overall browser performance. However, while such tweaks are always welcome, the majority of them are evolutionary rather than revolutionary changes.

Well, things are about to change as Mozilla has started the development of its new rendering engine called “Servo”.

Why should you get excited?

Instead of borrowing bits and pieces from “Gecko”, Mozilla’s current rendering engine (which “Servo” is not meant to replace), its engineers have decided to build the new engine from the ground up, specifically targeting multi-core hardware.

Not only that, but the whole project will be built using “Rust”, an experimental programming language developed by Mozilla itself.
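
To give a rough, purely illustrative idea of what “targeting multi-core hardware” means (a minimal sketch in present-day Rust, not Servo code; the parse/run helpers are made up), independent parts of a page can be handed to separate threads, which a multi-core machine can run simultaneously:

    use std::thread;

    // Made-up stand-ins for real engine work, just to show the shape of the idea.
    fn parse_html(src: &str) -> usize { src.len() }
    fn parse_css(src: &str) -> usize { src.len() }
    fn run_js(src: &str) -> usize { src.len() }

    fn main() {
        let (html, css, js) = ("<p>hi</p>", "p { color: red }", "console.log('hi')");

        // Each subsystem gets its own thread, so a multi-core machine can
        // work on HTML, CSS and JS at the same time instead of one after another.
        thread::scope(|s| {
            let h = s.spawn(|| parse_html(html));
            let c = s.spawn(|| parse_css(css));
            let j = s.spawn(|| run_js(js));
            println!("{} {} {}", h.join().unwrap(), c.join().unwrap(), j.join().unwrap());
        });
    }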

Why should you not be excited?

Unfortunately, such things don’t happen overnight and it’s not even clear if Servo will ever replace Mozilla’s current rendering engine.

All we can do now is wait.

[Thanks, Tibor]

[Via: Geek.com]


About (Author Profile)


Vygantas is a former web designer whose projects are used by companies such as AMD, NVIDIA and the now-defunct Westwood Studios. Passionate about software, Vygantas began his journalism career back in 2007 when he founded FavBrowser.com. He is also an adrenaline junkie who enjoys good books, fitness activities and Forex trading.

Comments (17)


  1. Przemysław Lib says:

    I do not know WHY people report Rust and Servo as serious Mozilla projects. Yes, they will be what they claim to be, HOWEVER they are not officially backed by Mozilla, nor are they destined to replace code already in use (as opposed to e.g. IonMonkey, which was from the beginning the “official” successor to earlier work).

    Look at the text of the news. No link to an official Mozilla site. Just a GitHub link :D

    Mozilla is a big organization (not a company :P), and the rigor about what you can do is not so strong. But do not attribute to Mozilla projects that are not official.

    • Shane Bundy says:

      >Do not know WHY people report Rust and Servo as serious Mozilla projects.
      Because they hold potential for future software development.

      > HOWEVER they are not officially backed by Mozilla, nor are they destined to replace code already in use.
      They are still Mozilla Research projects and are only meant for experimentation. They never said they would replace anything, but in the future they might prove valuable.

      >Look at text of news. No link to official Mozilla site. Just github link.
      Doesn’t matter, it’s still them: https://github.com/mozilla

      >But do not affiliate with mozilla projects that are not official.
      Irrelevant. Mozilla still develop a browser and this is a browser-related project.

      • PC`EliTiST says:

        …or you go with Chromium, and there you have what Mozilla is dreaming of.

        • Shane Bundy says:

          Actually, WebKit runs on a single core. Mozilla want a rendering engine that can split HTML, CSS, JS, etc. across all cores.

          • Przemysław Lib says:

            Split HTML across multiple cores, CSS across multiple cores, JS across multiple cores. Not just HTML on one core, CSS on a second, JS on a third. :D A bit of clarification to highlight the completely new approach.
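
            To put that in code (just an illustrative Rust sketch, nothing from Servo; process_rule is invented), a single workload, here a list of CSS rules, gets chunked across however many cores are available instead of sitting on one core:

                use std::thread;

                // Invented stand-in: pretend each CSS rule needs some independent work.
                fn process_rule(rule: &str) -> usize { rule.len() }

                fn main() {
                    let rules = vec!["a{}", "b{}", "p{}", "h1{}", "div{}", "span{}"];
                    let cores = thread::available_parallelism().map(|n| n.get()).unwrap_or(1);
                    let chunk_size = (rules.len() + cores - 1) / cores;

                    // One workload (CSS) split into chunks, one chunk per core,
                    // rather than the whole list being pinned to a single core.
                    let per_chunk: Vec<usize> = thread::scope(|s| {
                        let handles: Vec<_> = rules
                            .chunks(chunk_size)
                            .map(|chunk| s.spawn(move || chunk.iter().map(|r| process_rule(r)).sum::<usize>()))
                            .collect();
                        handles.into_iter().map(|h| h.join().unwrap()).collect()
                    });

                    println!("work done per chunk: {per_chunk:?}");
                }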

          • Shane Bundy says:

            Allocating a core for a specific workload (CSS, JS, etc.) makes more sense, but we’ll see what happens.

          • Przemysław Lib says:

            It does not make sense.

            Not when you have 8 cores + HT == 16 “workable” cores.

            We have reached the end of what a single core can deliver; now gains in that regard come from better manufacturing (smaller parts == more power efficient).

            Adding more and more cores is the way to go forward.

            So if you can split CSS onto 3 cores that would otherwise be idle? (IF you can do it in a performance-wise manner!) No-brainer answer: Yes!

            What this whole project is about is to explore ways to do just that (by finding algorithms and by creating programming tools!).

            And it is by no means an “official” Mozilla project.

          • Shane Bundy says:

            Hyper-Threading only allows programs to ‘see’ 16 hardware threads. It may or may not be a good thing.

            It’s extremely difficult to spread CSS (in your example) across multiple cores. The point I was making was Mozilla are trying to move away from the monolithic approach (HTML, CSS, JS, etc. all on one core) to a more modular one (HTML on one core, CSS on another, etc.).

            Regarding the “smaller parts == more power efficient” claim, define “efficient”. They use less power, yes, but that doesn’t make them more efficient. Power leakage is a major problem when making them smaller and so is manufacturing them if the fabrication process is new.

            This is a Mozilla Research project and, as I’ve said before, this is only an experiment. It’s not going to replace anything yet but it might come in handy if this yields positive results.

          • Przemysław Lib says:

            HT actually allows executing instructions from multiple threads on the same execution pipeline.

            And read the article. They aim at splitting whenever possible.

            I mean less power needed for the operation of gates.

          • Shane Bundy says:

            >HT actually allows executing instructions from multiple threads on the same execution pipeline.
            I know what Hyper-Threading does. I’m not naïve.

            >And read the article. They aim at splitting whenever possible.
            I read it and it makes far more sense than what you’re implying.

            >I mean less power needed for the operation of gates. Which means less heat. Which means smaller chip size.
            This is a software project, not reinventing the computer chip.

            >Anyway, we have reached the maximum frequency, and we also cannot tweak a single core for better perf indefinitely. Adding more cores and getting them to cooperate & share hardware more efficiently is the way to go.
            Irrelevant to this article.

            >Did I mention that GPGPU is becoming a serious solution for computing?
            No, you didn’t. And I’m already aware. With Microsoft proposing C++ AMP and with OpenCL taking the helm towards that goal, it’s more than obvious that’s the future of computing they want.

          • Przemysław Lib says:

            What I’m hinting at is that increasing the number of cores (and shrinking the manufacturing process) will be the way to increase performance.

            So if we put JS/HTML/CSS in their own threads on their own cores, we use 3-4 cores. But what if you have 16 of them in your system? It may be better to split more (and more).

          • Shane Bundy says:

            More cores =/= better performance; it’s also about how well they can scale on their own and when they’re working with other cores.

            You’ve changed your argument slightly. You suggested splitting CSS over multiple cores (again, as per your example), which is extremely difficult. What you’re saying now incorporates what I was trying to say earlier about putting CSS on one core, JS on another, etc.

            Don’t forget that not everyone has CPUs with many cores. And also, only AMD’s Opterons based on “Bulldozer” (and its derivatives) have 16 cores on a chip. Hyper-Threading does not count as extra cores, only extra hardware threads which, again, may or may not be a good thing.

          • Przemysław Lib says:

            What I’m saying is that the “future” can only belong to multicore solutions. So utilizing as many cores as there are in the system is the way to go.

            (Of course, this project is not about proving that more cores are better, but about utilizing the hardware that exists and will exist.)

            PS Why should any program care whether HT is equal to more cores or not? It’s hardware that should be utilized to its full extent.
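
            (For illustration only, not from the thread: in present-day Rust, for instance, a program can simply ask the standard library how much parallelism the machine reports, HT threads included, and size its work from that.)

                use std::thread;

                fn main() {
                    // available_parallelism() reports logical CPUs, so Hyper-Threading
                    // "cores" are counted too; the program just uses whatever is there.
                    let workers = thread::available_parallelism().map(|n| n.get()).unwrap_or(1);
                    println!("spawning {workers} worker threads");
                }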

          • Shane Bundy says:

            Because Hyper-Threading is a cheap way of saying you have more cores than you actually have. Still, I agree that whatever possibilities are present have to be utilised as well as possible.

            Linux is hit-and-miss with HT.

          • ToyotaBedZRock says:

            Apple is fixing that with WebKit2.

          • Shane Bundy says:

            WebKit2 only splits the renderer from the program, while Mozilla are aiming to split the JS, CSS rendering, etc. across multiple cores.

        • ToyotaBedZRock says:

          Unless you’re a heavy user, in which case you get system lockups and random restarts.

          Go ahead and load up 30 tabs, then try to open 15 more rapidly.