Why are interpreted programming languages slower than compiled ones?

I will get straight to the point without analogies, because this is not hard to describe and understand.

What does a compiler do? It translates the program code into machine language, and that machine code is executed later. If a compiler optimizes well, the result can in some cases come close to what one would write by hand in assembler. Sometimes the optimizer even comes up with better “ideas”.
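As a small illustration of what compile-time optimization means (the class and constant names below are mine, not from the answer): even the Java compiler folds constant expressions before the program ever runs, so no multiplication is left in the emitted code.

```java
// Constant folding: javac evaluates the compile-time constant 24 * 60 * 60
// and emits the literal 86400 instead of three multiplications.
public class FoldingDemo {
    static final int SECONDS_PER_DAY = 24 * 60 * 60; // folded at compile time

    public static void main(String[] args) {
        System.out.println(SECONDS_PER_DAY); // prints 86400
    }
}
```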


What does an interpreter do? It ships with a runtime that contains the individual commands of the programming language implemented in machine code. It then reads the original program code and calls those implementations one after another.

An interpreter has a main loop that reads one piece of program code per pass and makes the necessary calls. This is especially expensive for commands that would compile down to just one or a handful of assembler instructions, or that already exist as machine code in the runtime implementing the language's commands: there, the overhead of the interpreter loop dominates. Also, an interpreter can only execute the code exactly as the programmer wrote it, whereas a compiler can perform code optimizations because it analyzes the whole program during the compilation phase instead of walking through it line by line with blinkers on, as the interpreter does.
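To make that main loop concrete, here is a minimal sketch of an interpreter dispatch loop in Java; the toy instruction set and all names are invented for illustration. Even an operation that maps to a single machine instruction, such as adding two numbers, pays the decode-and-dispatch overhead on every pass.

```java
import java.util.List;

// Minimal interpreter main loop over an invented toy instruction set.
// Every pass decodes one instruction and dispatches to its implementation;
// that overhead is paid even for trivial operations like ADD.
public class ToyInterpreter {
    enum Op { PUSH, ADD, PRINT }

    record Instr(Op op, int operand) {}

    static void run(List<Instr> program) {
        int[] stack = new int[256];
        int sp = 0;
        for (Instr instr : program) {            // the main loop
            switch (instr.op()) {                // dispatch per command
                case PUSH  -> stack[sp++] = instr.operand();
                case ADD   -> { int b = stack[--sp]; int a = stack[--sp]; stack[sp++] = a + b; }
                case PRINT -> System.out.println(stack[sp - 1]);
            }
        }
    }

    public static void main(String[] args) {
        // Equivalent to "print(2 + 3)" in the toy language.
        run(List.of(new Instr(Op.PUSH, 2), new Instr(Op.PUSH, 3),
                    new Instr(Op.ADD, 0), new Instr(Op.PRINT, 0)));
    }
}
```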


So much for the essence of the question. In detail, today's compilers often no longer create native code directly, owing to the fact that there are many generations of CPUs with different architectures and instruction sets. Instead they first create an intermediate code: this step reads (parses) the program lines once and for all and converts commands, symbols (e.g. variable names) and literal values (numbers, strings) into a binary format, e.g. a byte code for the command, an address for a variable, or the binary representation of the value. This produces code that an interpreter could also execute. For the .NET languages, this level is Microsoft Intermediate Language (MSIL) code.
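Here is a hedged sketch of that one-time front-end step, with an invented byte-code format (the opcodes and encoding are made up; real MSIL or Java bytecode is richer, but the principle is the same): the line is parsed once, and commands, variable names and literals end up in a compact binary form.

```java
import java.nio.ByteBuffer;
import java.util.HashMap;
import java.util.Map;

// Invented mini front end: the result of parsing "x = 40 + 2" once, up front.
// Commands become one-byte opcodes, the variable name becomes a slot number,
// and literals are stored in their binary representation.
public class MiniFrontEnd {
    static final byte OP_PUSH_INT = 1; // followed by a 4-byte integer literal
    static final byte OP_ADD      = 2;
    static final byte OP_STORE    = 3; // followed by a 1-byte variable slot

    public static void main(String[] args) {
        Map<String, Byte> variableSlots = new HashMap<>();
        variableSlots.put("x", (byte) 0);               // symbol "x" -> slot 0

        ByteBuffer code = ByteBuffer.allocate(16);
        code.put(OP_PUSH_INT).putInt(40);               // literal 40 in binary
        code.put(OP_PUSH_INT).putInt(2);                // literal 2 in binary
        code.put(OP_ADD);                               // the "+" command
        code.put(OP_STORE).put(variableSlots.get("x")); // store into slot of "x"

        System.out.println(code.position() + " bytes of intermediate code");
    }
}
```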

This kind of compilation thus skips the conversion to machine code and the code optimizations, and in that respect is interpreter-like. The more relevant compilation steps happen only at runtime, through the JIT compilation (JIT = Just in Time) already mentioned by others. A JIT compiler on the end device can generate code that fits the CPU actually present, i.e. it can use exactly the instruction set extensions that the AMD or Intel chip at hand provides.
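On the JVM you can watch this happen: HotSpot's -XX:+PrintCompilation flag (a real HotSpot option; the class below is just a throwaway example of mine) logs which methods the JIT compiles once they become hot.

```java
// Run with:  java -XX:+PrintCompilation HotLoop
// HotSpot logs a line when it JIT-compiles hot methods such as square().
public class HotLoop {
    static long square(long n) { return n * n; }

    public static void main(String[] args) {
        long sum = 0;
        for (long i = 0; i < 1_000_000; i++) {
            sum += square(i);   // becomes hot, triggering JIT compilation
        }
        System.out.println(sum);
    }
}
```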

That this approach also has disadvantages can be read from the following headline: C# now also generates native code.

After all, JIT compilation already performs the kind of optimizations that a classic compiler would do, and what comes out is fully compiled code. Ahead-of-time native code, by contrast, is not optimized for exactly the CPU that is present, but it is available immediately and can therefore still run more smoothly.

Nowadays, deferring the compilation step to run time mixes the workings of compiler and interpreter, and pure classic compilation hardly happens anymore. Java goes one step further and runs a virtual machine, the JVM, which abstracts the operating system more than the hardware but is ultimately also a kind of interpreter that executes the Java bytecode. In addition, intermediate languages such as Java bytecode or MSIL make decompilation much simpler and also enable conversion from C# to VB and vice versa.
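The point about simpler decompilation is easy to try yourself: because the intermediate code keeps names, types and structure, the JDK's own javap disassembler (a real JDK tool; the class name below is just an example) can show it in readable form.

```java
// Compile and disassemble (real JDK commands, example class name):
//   javac Answer.java
//   javap -c Answer
// The output still shows the method name, typed operands and constant-pool
// references, which is why turning bytecode back into source is comparatively easy.
public class Answer {
    public static int add(int a, int b) {
        return a + b;
    }
}
```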

If you want to dive into the depths of compiler construction, the LLVM infrastructure project is recommended: The LLVM Compiler Infrastructure Project. At its core it is a compiler that works on its own intermediate language, LLVM IR, but the project also provides tools for building lexers and parsers that produce this IR.
