Managed is the New Native

May 15, 2012 in .NET, Elements, Java, Oxygene, Windows

It used to be that native development referred to compilers outputting code ready to execute on the target CPU. At the time, the pool of possible CPUs was small, and the alternative was runtime interpretation. Since then a lot has changed. Modern CPUs have multiple modes of execution (protected mode, long mode, etc.) and optional instruction sets (SSE4a, SSE5, etc.). A “native code” compiler must choose a minimum level to target, ignoring “higher” level functionality available on newer CPUs. This leaves programs unable to take advantage of the latest CPU innovations, often running in a legacy compatibility mode on the very CPUs they target.

What is more native on the latest 64-bit processor: 32-bit x86 code or intermediate code just-in-time compiled to take advantage of the 64-bit architecture and latest SSE instruction set? Not so simple, is it?

Managed code platforms such as Java and .NET have a “native” advantage that no unmanaged (so-called “native code”) compiler can match. Because managed platforms distribute their programs in an intermediate format (Intermediate Language [IL] assemblies for .NET and bytecode for Java), the Just-In-Time (JIT) compiler can compile the program specifically for the CPU it is running on. This means the program can scale up or down as necessary on each CPU, even a CPU that wasn’t released when you wrote the code.
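
To make that concrete, here is a minimal C# sketch (my illustration, not part of the original argument, assuming .NET 4.0 or later). Compiled once as a CPU-neutral assembly, the very same executable is JIT compiled to 64-bit machine code on an x64 OS and to 32-bit machine code on an x86 OS, with no recompilation:

```csharp
// Hypothetical file Probe.cs, compiled once as a CPU-neutral binary:
//   csc /platform:anycpu Probe.cs
using System;

class Probe
{
    static void Main()
    {
        // These values are resolved by the runtime on the machine where the
        // program actually runs, not on the machine where it was compiled.
        Console.WriteLine("Pointer size:   {0} bytes", IntPtr.Size);
        Console.WriteLine("64-bit process: {0}", Environment.Is64BitProcess);
        Console.WriteLine("64-bit OS:      {0}", Environment.Is64BitOperatingSystem);
    }
}
```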

A common misconception is that managed applications are slower than unmanaged applications, or that they are interpreted at runtime. In reality, a managed application is JIT compiled into highly optimized machine code when it executes.

The alternative for non-managed compilers is to compile for multiple targets and then provide software emulation of the newer instruction sets on older CPUs. The result is either a bloated program carrying emulation code that is rarely used, or a program that is unable to take advantage of the latest CPU optimizations.

Now that managed is the new native, it leaves developers free to focus on what is really important: A Native User Experience.

52 responses to Managed is the New Native

  1. If I were to read this a few years ago, my response would have been “this man is mad”, but the reality is that .NET is the future. For how long? Not sure.

    Good read!

    • .NET and Java are as much of an improvement over existing non-managed tools as those tools were over writing in assembly: You get huge productivity improvements, and you get to take advantage of numerous compilation optimizations created by people who spend all day, every day, working on just those optimizations.

  2. I fully agree with you Jim.
    In a forum entry I posted that, within a short time, “native” code will disappear from many areas of programming. Of course there are ways to build native applications directly through cross-compilers, but we only have to look at the wide usage of Java’s runtime to see that native compilation isn’t done everywhere. With .NET, a modern approach is given to us – the developers – to build tools that will run at optimum performance on a wide range of devices.

    I think the next step is to bring in a platform that is often not stated when listing Windows, Linux, Mac, iOS, Android, Windows Phone, Blackberry, … . This platform is “The Web”. There are approaches like Smart Mobile Studio to bring higher-level languages to the browser, but there aren’t many such projects. Why are there so many HTML5 applications out there? Right, because the browser is available everywhere – but the tools used to build such applications weren’t designed for developing such big products. That’s why I believe in the success of Smart Mobile Studio and the projects out there aiming for the same result – bringing higher-level languages to the web.

    Just imagine a tool like Smart that compiles Object Pascal, C#, VB, etc. directly to run-everywhere JavaScript and HTML5/CSS3.

    • The web is an interesting platform. It isn’t the best solution for all situations, but it offers a number of useful advantages that you don’t get elsewhere. It is important that we as developers consider all the different platforms and solutions and choose the best one for each situation. Blindly continuing to use one tool, platform or approach in all situations means you rarely have the best solution.

  3. Same topic, same company again? I thought these were more informative posts, but it seems someone is pushing an agenda.

    It is a business, so nothing wrong with that.

    • I actually wrote this in response to marc’s post about the importance of native user interfaces. He was focusing on user interfaces, while I am talking about CPU execution of code. Two important points, but different points. I guess we are both in agreement on the importance of “being native”, but there is more to being native than one might consider.

      • Well, I think it is not unreasonable to state that Marc does have an agenda in that he represents a product that is .NET runtime based.

        • marc said on May 16, 2012

          again: what does “have an agenda” mean, really? if it means, “have a consistent world view, openly talk about it and create the products that celebrate it”, then yes, i have an “agenda”.

    • marc said on May 15, 2012

      i always love it when consistency is confused with “having an agenda”. Yes, Jim’s post certainly elaborates on points i made before — but doesn’t that make sense? would you rather see us as a company behave as if the left hand did not know what the right hand was doing, contradicting ourselves with every word we say? i didn’t think so.

      in a similar vein — when i voice my opinions on these topics, i often get accused of saying the things i do because they “fit” our products, the implication being that i’m only saying them to push our products, not because i believe in them. No-one ever considers the opposite causality: maybe our products reflect the ideas i talk about *because* i/we believe in these things, and thus design our products around these philosophies? isn’t that a whole lot more plausible? ;)

      marc

      and p.s.: what’s wrong with having an agenda, really? that just means you have a plan and you stick to it. which is a good thing. ;)

  4. I’d rather go the path of releasing different versions for each platform (32/64-bit for Windows, Linux, Solaris, MacOS, etc.) and patching the executable at runtime to use different versions of functions depending on the processor features available (like what FastCode does).
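
    For illustration, here is a rough C# sketch of that kind of once-at-startup dispatch (my example, with hypothetical names and a deliberately crude feature check; FastCode itself is a Delphi library, and a real implementation would test CPUID feature flags rather than process bitness):

    ```csharp
    using System;

    static class Sum
    {
        static readonly Func<int[], long> best;

        static Sum()
        {
            // Pick the best implementation once, for the CPU/process we are
            // actually running on, instead of baking one lowest-common-
            // denominator choice into the binary at compile time.
            if (Environment.Is64BitProcess)
                best = SumUnrolled;   // stand-in for an SSE/64-bit tuned routine
            else
                best = SumSimple;     // baseline that works everywhere
        }

        public static long Total(int[] data) { return best(data); }

        static long SumSimple(int[] data)
        {
            long total = 0;
            for (int i = 0; i < data.Length; i++) total += data[i];
            return total;
        }

        static long SumUnrolled(int[] data)
        {
            long a = 0, b = 0;
            int i = 0;
            for (; i + 1 < data.Length; i += 2) { a += data[i]; b += data[i + 1]; }
            if (i < data.Length) a += data[i];
            return a + b;
        }
    }
    ```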

    • I agree it is important to develop for the specific target platform, especially when it comes to user interface: One UI never fits all for real native apps.

      There are some times when it is good to specifically target different CPU architectures though. The reality is most developers don’t need to, even though they like to think they do. A developer needs to know the performance impact of the choice of managed vs. non-managed code, and most importantly they shouldn’t assume non-managed has some magical speed advantage.

      Always choosing non-managed code is the modern-day equivalent of always choosing to code completely in assembly.

  5. I don’t agree with this at all. I’ve been hearing the exact same statements from proponents of managed languages since around 1998, in the context of Java. Now, 14 years later, it looks like managed languages will never catch up to native. Apple understands this, and that is the reason why they are using a native environment. Microsoft understands this and is going native, back to COM. I think the idea of compiling at runtime is just a flawed one. The proof is in the pudding. I don’t care about benchmarks, but in my day-to-day experience managed apps on the client just suck compared to their native counterparts.
    Have you ever thought about how great Eclipse would be nowadays had they chosen Qt/C++ instead of Java?

    • Good, I’m glad you don’t agree with me. That always makes for a more interesting discussion!

      To be clear, Microsoft isn’t “going back to COM”; they never really left COM. Instead they are offering a common COM interface for all three development options: managed, scripted and non-managed. .NET is just as important to Microsoft today (if not more so) than at any time previously. This is the first time the launch of a new OS had .NET as a key part of the platform development paradigm. Also, scripting with JavaScript has been elevated to the same level as .NET and VC++.

      Also Apple is moving away from manually managed memory to automatic reference counting (ARC), which is more of a managed approach.

      Any problems I’ve heard anyone express about Eclipse have nothing to do with it being written in Java, but rather with its design philosophies. You can write a bad app (not saying Eclipse is bad) in Qt/C++ just as easily as you can in Java.

      • marc said on May 15, 2012

        i’d consider Objective-C/Cocoa “half-managed”, leveraging a lot (but not all) of the great things that a managed environment can offer, such as a great, dynamic and powerful object runtime. ARC isn’t really the biggest part of that, as it’s really just a nifty compiler technology that simplifies the manually typed code. The Objective-C Runtime is what really makes the platform almost-managed.

        as i wrote on https://plus.google.com/113924775815750311571/posts/PUo3YN6d3Sr:

        (c) i’d consider Objective-C somewhere between managed and unmanaged. It’s not really (yet) a managed environment like .NET or Java, but it’s not really entirely unmanaged code, either. It has a runtime. It has a sophisticated system for passing messages between objects that allows for control and “feels” a good deal more managed than Delphi or C++’s approach of “here’s a pointer, trust it’s a method and JMP to it”. To me, that makes it more managed than unmanaged.

        Objective-C — as of its latest revision introduced with Xcode 4.2 and ARC — also takes away the burden of memory management, essentially providing a “best of both worlds” of deterministic memory management under the hood (if you’d disassemble ARC code, it would look as if all the memory management were hand coded, as you’d do in previous versions, or as you’d do in Delphi, calling Free. But on the code level, you can write code as if it were garbage collected, not worrying about releasing objects).

        One could say Objective-C has many of the upsides of managed languages, without technically being a managed language.

        • For Delphi you can do something similar in code with reference-counted pointers, more at

        • VRV said on May 17, 2012

          In my view, Delphi (pure) is also between the managed and unmanaged worlds, because Delphi already supports GC with strings and interfaced objects while emitting Win32, Win64, Mac32 and iOS executables.

          According to the road map, Delphi will be a managed native cross-platform solution (at least in the future).

          • Exactly. No one would argue that Delphi isn’t “native” because it has some managed aspects.

      • Well, yes, I guess the question is what is managed and what is native. I love Go for example, and I would consider it native although it is garbage collected.
        The two things I am against are 1) compiling on every run and 2) having to install a big runtime on the client’s machine.
        @Jim, as for Qt vs Java, you are right, Eclipse is kind of a mess due to various design decisions. You can write slow programs in C++ (or other native languages), but you really have to go out of your way to do so ;) In Java (or .NET for that matter), it’s the default.
        Objective-C ARC is a prime example I admire Apple for. Instead of going the mainstream way of adding a GC, they added some compiler magic and solved the problem for developers while preserving the user experience. I didn’t even think that was possible, but they did it.

        • As marc pointed out there is actually a runtime on iOS, but it doesn’t provide garbage collection.

          One of my main points, which I will follow up on later, is that managed isn’t slower than non-managed. It is just different. There are some scenarios that favor each approach, but it is silly to stick to just one approach because it is faster in some situations.

          The beauty of the .NET framework is Microsoft is pushing it out on all supported desktops, so we don’t need to install it on our client’s machine if they are running Windows (and have Windows Update turned on). If you are using Java then you do need to install it on Windows, but it is installed on the Mac or Android for you.

  6. This article just retreads the same old propaganda that has ALWAYS been peddled w.r.t. the supposed efficiencies and optimisations of “managed” vs “unmanaged” (although actually it has nothing to do with managed runtimes and is merely an aspect of deferred compilation, which modern managed runtimes also happen to incorporate – there’s no reason why an “unmanaged” compiler shouldn’t incorporate this sort of technology). Correlation, not causation (or dependency).

    Sadly for those expounding this view, the theory simply hasn’t been borne out in practice.

    Whatever advantage is gained from the fact that the compiled instructions can be more optimally targeted to the execution environment is lost to the costs of the managed runtime itself.

    The ideal would be to have deferred compilation and an unmanaged runtime – as I point out above, there is no reason why deferred compilation HAS to be dependent upon a managed runtime. It requires only an intermediate compilation product and a deferred compilation boot-strapper, but that does NOT have to involve a comprehensive managed runtime environment.

    The idea that Microsoft never gave up on COM is specious nonsense. Microsoft did their damnedest to convince everyone that .NET was the future and that it would replace COM. The fact that they found it impossible to follow through with this even themselves and have had to back down from this ambition is NOT some sort of supporting argument for the notion that they never intended COM to go away.

    The renewed importance of COM (yes, it IS a renewed importance) is a marked failure for .NET, which is why you and others who backed the managed horse are so desperate to re-write history in this way.

    As for the idea that ARC is a managed approach … what a load of nonsense. If ARC is “managed” then Delphi has been “managed” since Delphi 2 (ARC for strings and dynamic arrays) and Delphi 3 (ARC for interface references, extendable to broader intent by judicious use of interfaced objects).

    You are spinning so hard and fast that you are making yourself dizzy! LOL

    RemObjects are backing your future on the managed platforms. Good for you. But stop trying to convince us that it is the inevitable and right path by concocting these ridiculous arguments and justifications out of nothing. It just comes across as if you aren’t really all that convinced yourselves and are simply trying to justify it to everyone else to make yourselves feel better about it.

    • marc said on May 29, 2012

      “RemObjects are backing your future on the managed platforms.”

      that’s a rather self-filtered view of what we do in order to paint this thread as propaganda. Your accusation would make sense if we were a .NET shop, but we’re not. RemObjects Software is not about managed platforms. RemObjects Software is about platform-native developer tools on ALL platforms. As you full-well know, that includes Win32/Win64 and Mac/iOS, both of which are not managed.

      please also note that at no time did i (nor Jim, afaik) claim that ARC makes Objective-C managed. i made the point that the Objective-C *runtime* and how it works and manages objects makes Objective-C almost feel like it’s a managed platform, and provides some of the same benefits that i like about managed platforms (even though it isn’t really managed). That has nothing to do with ARC.

      That said, both ARC *and* the very awesome model of Autorelease Pools, pre-ARC, make Objective-C memory management and object life-cycle management a lot simpler and more intuitive than, say, Delphi’s or C++’s. (and if you disagree with that, i’d wager you have not *really* looked at how object life-cycle management works in practice, in Objective-C apps). Therefore i’d say even pre-ARC Objective-C was light-years ahead of Delphi in that regard.

  7. I definitely see the convenience of managed environments, but I will wait to believe they perform better until I start seeing games like Call of Duty, Battlefield, etc. being released as .NET applications.

    • There are managed games written with .NET and Java, but games are a specific class of application that may actually require non-managed tools, even assembly.

      It would be silly to design all applications the same way games are designed. Your application is more likely to be talking to a database and parsing XML than to be the next Call of Duty. Creating your own database drivers and XML parsers from scratch for each application isn’t a good approach. Find the platform and tools that have the best performance for the problem you are solving.

      Don’t assume “native” is faster. Test the specific problems you need to solve.
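
      A minimal C# sketch of the kind of test I mean (my example, with an arbitrary stand-in workload; a serious comparison would use a proper benchmark harness): time the operation you actually care about, and warm up the JIT first so you measure steady-state code rather than compilation.

      ```csharp
      using System;
      using System.Diagnostics;

      class ParseBenchmark
      {
          static void Main()
          {
              // An arbitrary stand-in workload: parsing a million small strings,
              // closer to typical business code than a game inner loop.
              var data = new string[1000000];
              for (int i = 0; i < data.Length; i++) data[i] = (i % 1000).ToString();

              ParseAll(data); // warm-up: let the JIT compile ParseAll before timing

              var sw = Stopwatch.StartNew();
              long total = ParseAll(data);
              sw.Stop();
              Console.WriteLine("Total = {0}, elapsed = {1} ms", total, sw.ElapsedMilliseconds);
          }

          static long ParseAll(string[] data)
          {
              long total = 0;
              foreach (var s in data) total += int.Parse(s);
              return total;
          }
      }
      ```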

      • Yep, much safer to assume managed is faster, right?

        Everything you said about managed being more efficient, and the reasons you gave, have been trotted out time and time again, but real-world experience simply doesn’t bear out the theory.

        For one thing, many of the super-optimised deferred compilers don’t actually exist (why would they? why would Microsoft put effort into creating super-efficient compilers for the latest x86 processors in the .NET framework, when those same x86 processors are themselves designed to run the older x86 code anyway?). Leave the super-optimised code for the hand-crafting assembler jockeys – I’m pretty sure that’s the predominant attitude among the managed framework vendors, precisely for the same reason that you say that managed frameworks aren’t suitable for all tasks.

        i.e. in practice, the deferred compilation that supposedly produces these super-optimal apps is little better than the general-purpose output of “native” compilers, except that you also have to take the hit of the managed runtime scaffolding AND the upfront compilation, at least once and sometimes more (if the JIT cache gets dirty or purged for whatever reason).

        Also, please stop conflating deferred compilation with managed runtimes. If there were genuine benefits to deferred compilation, it is a technology that could very easily be incorporated into non-managed compilers. Deferred compilation does NOT require or rely on any aspect of a fully managed runtime. It is merely the case that only managed environments have incorporated this technology (to date).

        One has to wonder if perhaps they are trying to compensate for something….

        • Don’t assume either way; test and find which scenarios are better for managed vs. unmanaged. I only said that managed, specifically deferred compilation, gives you the potential for “more native” code in general than unmanaged. Of course the code being native isn’t the only measure of performance.

          You are right that deferred compilation isn’t unique to managed frameworks. In fact we’ve seen it for years on Unix, Linux, etc., where software (or even the OS) ships as source code and is then compiled for the target hardware.

      • “Don’t assume “native” is faster. Test the specific problems you need to solve.”

        That is something that requires an exceptionally broad experience base. But yes! 100%.

    • To Kyle Miller’s list I add Microsoft Office.

      • I think the main reason Microsoft Office is still written in VC++ is that it has a huge legacy code base. Throwing out working code because a new tool is available is rarely the right choice.

  8. @Florian: about SMART, this is a great project. But for most common data processing with arrays, JavaScript is still very slow (unless you use w3buffers – see http://op4js.optimalesystemer.no/forums/topic/executable-speed-in-ralation-to-delphi ), and it lacks some arithmetic capabilities (try to implement zip compression/decompression, for instance). But it is going in the right direction.

    About the article, I totally agree with Jim McKeeth’s remark.
    It is a marketing argument to state that using a JIT or NGen pre-compilation can be faster than traditionally pre-compiled code. Just take a look at all the optimizations appearing with each .NET framework release.
    It is always “faster than fastest”. Sounds like “cleaner than cleanest” in washing powder ads…

    Compile-on-target is not the biggest difference, IMHO, between managed and unmanaged.
    I know that nerds state that a Gentoo distro will be faster than a plain Ubuntu/Debian with pre-compiled packages, just because it is compiled for the exact hardware it will run on. If you have time for the compilation, and like drinking liters of coffee, that’s OK. But most end users won’t see any difference.

    Speed is not about compilers, but mostly algorithms and data structures.
    Your argument of “latest compilation means faster” is truly a false argument.
    Or, at least, a marketing argument, not a technical one.

    • “Speed is not about compilers, but mostly algorithms and data structures.”

      I completely agree with you on that point.

      “Your argument of ‘latest compilation means faster’ is truly a false argument.”

      I wouldn’t say latest compilation is the most important factor for performance, but I would say that the more you know about the target system when you compile, the more optimizations you can make.

  9. The native approach under Windows worked well because of i386 compatibility and Microsoft’s slow adoption of Intel’s features.

    IBM did not introduce a ‘managed’ runtime for assembler 30–40 years ago because they were funny people. The moment you have to address different processor architectures, more and more decisions move to the runtime and have to be isolated from the application. Garbage collection in mature technologies has been state of the art for 20 years now or longer. The Wintel world has been behind the times, but the challenge grew, and with it, well-known and proven concepts have been adopted. The point is the isolation of the application from the processor. This is the advantage for those who have to run the applications, maintain them and service the users.

    How does someone optimize floating-point speed in Java? Change the runtime and recompile. That’s all.

    A friend of mine once wrote a special implementation of a visualization on an HP vector engine in order to avoid purchasing an SGI cluster. A good example of benefiting from native code. Pressing F9 in Delphi is not native development; this is still ‘VB’. Nor is just pressing F5 in VS a lot better. This is hobbyist level.

    Let me say that C plus assembler, optimizing against one target architecture, is native development. I think these special skills will be required, but by fewer and fewer people. Someone who thinks native development makes sense should apply at a technology trust, or found a company with the aim of becoming one.

    Get to know the evolution of the Intel processor and the different strategies in branch prediction, and try to avoid pipeline flushes. Some people still believe the execution plan of a query is similar to the SQL query’s behavior at runtime… In the end, more and more decisions are based on statistics of the application’s execution behavior gathered at runtime. :) The goal has changed: the fast single application is the traditional goal in the Intel world on the Windows platform, but today, more than ever, a steady flow of execution is moving into focus.

    As Jim pointed out, clearly distinguish between the runtime environment and the sales terms Java or .NET. Some drawbacks do come from the language design of C# and Java. Of course a good developer does have a look at the ‘opcode’ generated and will adapt the implementation.

    There are many myths in IT – one technology/one language; we rely on one DB technology and therefore decide on a mainstream solution, because a mainstream DB is complex, so all the other DBs are either complex or not capable…

    Maybe Delphi is moving managed. This is my impression, and maybe one reason for this article besides the questions in EMB’s survey. No one knows.

    • Exactly. True native development is manually optimizing for the specific target architectures. This is something most developers don’t need, and don’t actually ever do. Even Delphi has an optimizing compiler that makes some guesses about what CPU the code will run on. The problem is it has to make general guesses, because it has to target all CPUs with each compile.

  10. Consider how many .NET, Java, Visual Basic etc. runtimes we have on our PCs right now.

    And each new version just gets bigger, but we cannot throw the old ones out.

    If a future Delphi offers an optional “Managed Lite”, as Barry Kelly once suggested, fair enough. The executable as a whole still targets a level of the platform that is likely to live longer.

    Whilst writing to target a runtime is OK for consumer apps, my worry is it is too fleeting for long-lived applications.

    • While Delphi doesn’t have a “runtime” like .NET, it does have system requirements. The VCL and RTL wrap a number of parts of the Win32 API. Throw out the old Win32 and Delphi quits working.

      With Windows 8, Microsoft has announced WinRT as the successor to Win32. Luckily they are not throwing out Win32, so Delphi applications will continue to work. But who knows for how long?

      Pretty much any programming language requires a library to make it useful. Microsoft does a good job making sure the .NET framework is on most Windows computers, and the Mono framework can easily be installed on most Linux and Mac computers. I wouldn’t be surprised if, combined, it is installed on more computers than the Win32 API.

      Managed is the future of native applications. As you mentioned, Barry Kelly is a fan of managed memory. Of course that is just one aspect of a managed platform, but one we might see in the future for Delphi. A step in the right direction.

      • Sorry Jim, but ‘managed is the future’ might be the case for you.
        But for many of us, it is not.

        • My original point was not that “managed is the future” but that managed is more future-resilient for native execution.

          A non-managed application can only run on a future CPU architecture in a compatibility or legacy mode, while a managed application can JIT to run natively on a future CPU. Take 32-bit vs. 64-bit as an example. Until the non-managed compiler adds support for 64-bit, you cannot support 64-bit, and when it finally does, you have to rebuild your application.

          With a managed compiler you write your code against the runtime, and then there is no need to re-compile. The JIT process compiles your code to native 64-bit instructions on a 64-bit platform.

      • It isn’t luck that Microsoft is not throwing out Win32. Faced with strong challenges from OS X, iOS and Android, for them to do so would be commercial suicide. Win32 will be with us for a very long time for that reason.

        I would be. Very surprised indeed.


  11. x86 code running on a 64-bit processor IS NATIVE, because the x86 instruction set is right there. The new x64 processors don’t emulate x86, and neither do the 64-bit OSes (the 32-bit API is still there as well).

    • I didn’t think I said 64-bit processors emulated x86. Just that 64-bit code was more native because it was able to take advantage of more of the CPU architecture.

  12. “Managed is the New Native” is just a fancy title, but utterly wrong in essence. It’s just as “false” as “JavaScript is the Assembler of the web”, or any other blog title that starts with “Top N XXX (ways, books…)”.
    But this article is interesting, because Marc H. published something similar recently. Seems to me that RemObjects is on a crusade to prove something, or maybe it’s a marketing effort.

    • Actually, I was debating this point with marc when he published his post about native user interfaces. I completely agree with him about what makes a native user interface, and how important that is, so I didn’t want to post this article too soon after, because people might misconstrue it as a disagreement.

      “JavaScript is the Assembler of the web” is an interesting idea. If it isn’t JavaScript, what would you say is the assembler of the web? I never really considered what would fill that role before.

  13. After having existed for so many years, Java and DotNet only seem to be useful on servers.

    The programs that I use on a daily basis are all unmanaged, and they would probably suck big time if they weren’t. Let’s see which applications I have open at the moment:
    Outlook, Total Commander, MsWord, Excel, WinRar, Notepad++, Everything, Google Chrome, Remote Desktop, Delphi, HostMonitor, uTorrent, and HeidiSQL. Besides my currently open applications I often use Photoshop, Cakewalk Sonar, 3dsMax, TortoiseSVN, QGis, PostgreSQL, MySQL, FireFox, Putty, Acrobat Reader.

    Hmmm.. none of those is managed. In fact, I can only think of two managed pieces of software that I make use of regularly: Aptana Studio (Eclipse based), and GeoServer. Both of those are Java, and would probably be nicer to use if they weren’t.

    For almost every commercial or open source DotNet application, there’s a more useful unmanaged counterpart. Is that because DotNet hasn’t been around long enough, and we should give it some more time?

    DotNet may be “the future”, but unmanaged software is still ruling today’s world, or at least mine.

    • I am not saying there is no place for unmanaged applications. Obviously there is. I am just saying that managed is just as or more native than non-managed.

      On the Windows platform, Java suffers from not having native widgets. With WPF and .NET that is not the case. While WPF is an abstract widget framework that is not anchored to Win32, it is a fully robust and completely native widget framework that builds beautiful Windows applications.

      • I see the potential of managed applications, and every time I work with C# I’m positively amazed at how easy it is to work with. But somehow there are hardly any DotNet killer desktop applications, and it amazes me.

        It cannot be that the DotNet platform is still too young or immature. There’s no shortage of DotNet developers, or a lack of talent, and there’s no shortage of CPU power or network bandwidth. Garbage Collection should make it easier for starting developers to create software, and JIT compiling should be able to run super optimized per processor and environment type. There are more books about C# than there are Delphi developers, and every SDK or library has C# support.

        Still (and that’s why I’m so amazed), there are no real killer DotNet applications. I really can’t think of a single desktop or server application that I use regularly that’s written in DotNet, and I usually open executables in a hex editor to see what they were written with.

        Is it because for more complex software you need more experienced developers, and those are generally older and thus biased to C++ for example? Or that C++ is used by older experienced developers, and they just write better software?

        • I use Paint.net regularly and it is written in .NET. I believe it uses NGEN during install to build a native executable for the target platform.

          My point is not that all desktop applications should be written in managed tools. My point is that the term native is a misnomer when applied to non-managed development. A so-called “native” non-managed compiler just makes a best guess about the CPU the program will run on, and the program usually doesn’t run fully natively.

          As you point out, there are a lot of other criteria in choosing a tool to develop desktop applications with. Native code execution is obviously not the biggest determining factor.

  14. Every layer above machine code brings performance issues, and that is an almost universal truth. If managed code was this good, Blend wouldn’t have been a performance disaster. High-end games, OSes, database engines and products like Office would have been written/rewritten in it. As it stands, managed code is acceptable for non-performance-based software, but as soon as speed or resource management is necessary, out comes C and assembler to get the job done. Those that cannot write either well likely believe posts like this.

    CPU features… gah… how about getting dotnet to run fast enough that you don’t see screen refreshes on complex forms, or to stop sucking so much memory in complex server apps that it borders on painful. After that we can talk about using the latest CPU features, which amount to a tiny fraction of performance for most apps and services.

    • marc said on May 18, 2012

      “Every layer above machine code brings performance issues and that is an almost universal truth.”

      yes. real men code in machine byte code. everything else is for sissies.

    • Programming is all about abstractions. Obviously we don’t all write in machine code. To claim that all abstractions lead to slower program execution is to claim that all developers are smarter than the ones who build the abstractions they leverage.

      There are two reasons not to write in machine code: 1) you get more done using a higher level of abstraction, and 2) someone else has put the effort into optimizing the compiler and runtime libraries that you use.

      If there were no abstractions beyond machine code then all development would take forever and every developer would have to debug their own personal hashtable implementation. That is not the real world.

      I’m assuming the .NET form issues you are referring to are with WinForms. If you want good .NET form performance check out WPF.

      • marc said on May 19, 2012

        And incidentally — WinForms sluggishness has *nothing* to do with managed code, and everything to do with WinForms using GDI+, which has no hardware acceleration. Build a UI framework using GDI+ in Delphi, or C++, and it’ll be just as sluggish. Build a UI framework using GDI or DirectX in managed code (such as VCL.NET or WPF, respectively), and UI performance will be just fine and the same as in non-managed apps.

        Granted, GDI+ was a crappy choice for MS to use with .NET, as it seemed to “prove” the performance issues everybody expected managed code to have at the time (except, of course, it did not – any more than a stuck hand brake “proves” that a Porsche is a slow car) [yes, i made a car analogy. sue me ;].
