How much did your computer’s memory hinder your programming in the 80s and 90s?

There was no obstacle at all. You can’t look back at it that way.

Cars did at most 80 km/h. That was not a hindrance, that was just how it was. You didn’t sit in the car thinking, oh, if only I could do 130 now.

You didn’t have strawberries in winter. That was no restriction, that was just how it was. You didn’t think, oh, if only there were strawberries now.

When I started my career in the late 1970s at a German computer company (Nixdorf), which supplied computers for the business market, we programmers/designers had to limit the programs of the Comet accounting package to a staggering size of 14 KB. (For reference: the machines had 64 KB of internal memory, of which 40 KB went to the OS and the Basic interpreter.)

The programs were written in Basic. When we ran into a space shortage (and we did), the first thing to go was the comments. We had a special program for this, which stripped out the comments and stored them separately. When maintaining a program we could temporarily add the comments back in.
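To make the trick concrete, here is a minimal sketch of such a comment-stripping tool, written in Python for readability (the original would have been a Nixdorf-era utility, not Python); the REM/apostrophe comment syntax and the file layout are assumptions on my part:

```python
import re

def strip_comments(src_path, code_path, notes_path):
    """Split a Basic source file into bare code (to ship) and the
    stripped comments (archived by line number for maintenance)."""
    # Assumption: comments start with REM or an apostrophe. A real tool
    # would also have to skip apostrophes inside string literals.
    comment_re = re.compile(r"(REM\s.*|'.*)$", re.IGNORECASE)
    with open(src_path) as src, \
         open(code_path, "w") as code, \
         open(notes_path, "w") as notes:
        for lineno, line in enumerate(src, start=1):
            line = line.rstrip("\n")
            m = comment_re.search(line)
            if m:
                notes.write(f"{lineno}: {m.group(1)}\n")  # archive the comment
                line = line[:m.start()].rstrip()          # keep only the code
            if line:                                      # drop now-empty lines
                code.write(line + "\n")
```

The line numbers in the notes file are what would make it possible to merge the comments back in during maintenance, as described above.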

Furthermore, the texts that appeared on the screen or printer were pulled from a parameter file. We were not allowed to hard-code any text in the program; all the necessary screen and print texts had to be requested from the head office in Germany. With a bit of clever programming this had a clearly positive effect on the size of the program. An additional advantage was that each language had its own text file, so the program itself was identical throughout the world. Normal nowadays, but very progressive for that time.
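The mechanism might be sketched like this, again in Python; the file name, the numbered one-message-per-line format, and the German example text are invented for illustration:

```python
def load_texts(path):
    """Load numbered screen/printer texts from a per-language parameter file.
    Assumed format: one '<number>:<text>' entry per line, e.g. '17:Invoice'."""
    texts = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if not line:
                continue
            num, _, text = line.partition(":")
            texts[int(num)] = text
    return texts

# Tiny demo: write a hypothetical German text file, then use it.
with open("COMET.DE.TXT", "w", encoding="utf-8") as f:
    f.write("17:Rechnungsbetrag\n")
texts = load_texts("COMET.DE.TXT")
print(texts[17])   # the program refers to message 17, never to a literal string
```

Swapping the parameter file is all it takes to switch languages, which is why the same program could ship worldwide.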

If you still ran short of space, you started reusing variables (especially strings) for several different things. In the end it became real bit-squeezing: free a few bytes here, a few more there, and you could fit in another statement.

At one point we were told that our programs were allowed to be 15 KB. An extra KB of space!! Well, the flag went out. Party.

Because you had to make your code as short as possible, documenting your program was tremendously important for maintenance.

TL;DR: The size of the memory had a huge impact on programming.

‘Hindered’ might be the wrong term, because it seems you really mean that the possibilities were limited.

Strictly speaking that was true, just as the possibilities are limited nowadays: depending on the environment in which your program has to run, there is a limit on available memory, available storage or processor clock speed that your program really has to reckon with. Whether that environment is an old seventies calculator or an army of parallel cloud machines, you have to deal with a restriction. Not the same restriction, but a restriction nonetheless.

Although I wasn’t working in the period you describe, I have the impression that programmers did not feel limited. Computing has always had the advantage of being an exciting, dynamic field in which new improvements (more memory, higher clock speeds) became available at an exponential rate. You could only get excited when you got your hands on a newer model and could write software that exploited those new possibilities. (I recommend a nice little book by Fabien Sanglard explaining that Wolfenstein 3D actually shouldn’t have been possible, because it made the hardware do things it simply wasn’t designed for!)

I would almost like to reverse your question: “To what extent has your computer’s memory hampered programming possibilities in the 21st century?” Today’s programmer cares about very different things than writing the most efficient program; perhaps they find it more important to write a program that is easy to maintain and extend. For instance, assembly is (as it was in the period you describe) the perfect way to write powerful software that wrings the maximum out of the hardware, but anyone who has used it will confirm that the resulting code is practically unreadable. Nowadays we have the luxury of choosing from different languages that each have an edge for a certain type of application. And often that edge is development convenience or readability. If you can program faster and with fewer errors, you can also come out with new applications faster.

To answer your question: more powerful computers provide more powerful development environments. Nowadays we have many fine tools at our disposal that can verify our code, correct it before we even attempt to compile it, offer context while we write, and so on. The programmer can therefore afford a little more. And this also translates into better software with fine functionality that we now take for granted, such as an editing history (those undo and redo buttons weren’t always there!).

Around 1980 I had a catalog program in Basic running on a Cromemco computer with two hard drives of 5 MB (!) each, which sorted products by catalog number. A bubble sort routine took care of this, but there was simply not enough RAM to perform the process in memory. As a solution, I wrote every bubble iteration away to the hard drive… After the program was started with a new list of products (< 100), the process began, with constant writing to and reading from the hard disk. It took about 5 minutes to sort a list of some 100 products.
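Reconstructed as a sketch in Python (the original was Basic), the idea is roughly this: each pass streams the records through memory two at a time, swapping where needed and writing straight back to disk, so RAM never holds more than a pair of records. The file names and record format are my assumptions:

```python
import os

def disk_bubble_sort(path):
    """Bubble sort a file of records (one catalog number per line) while
    keeping at most two records in memory, trading RAM for disk I/O."""
    swapped = True
    while swapped:                       # one full bubble pass per iteration
        swapped = False
        with open(path) as src, open(path + ".tmp", "w") as dst:
            prev = src.readline().rstrip("\n")
            for line in src:
                cur = line.rstrip("\n")
                if prev > cur:           # out of order: swap the pair
                    prev, cur = cur, prev
                    swapped = True
                dst.write(prev + "\n")   # prev is settled for this position
                prev = cur
            dst.write(prev + "\n")
        os.replace(path + ".tmp", path)  # the next pass reads the new file

# Demo with a handful of catalog numbers.
with open("catalog.txt", "w") as f:
    f.write("\n".join(["C042", "A007", "B113", "A001"]) + "\n")
disk_bubble_sort("catalog.txt")
print(open("catalog.txt").read())
```

With up to one full rewrite of the file per pass, and up to n passes for n records, the five minutes for a hundred products sounds entirely plausible.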

In my case, I programmed in that period in both Basic and Assembly.

In my opinion, the memory mainly limited the speed at which you could develop software.

However, you were challenged more at the time to make your code as short and concise as possible, because otherwise it simply didn’t fit.

With Basic you quickly ran into a limit on the maximum size of your code, because beyond that it would no longer compile (around 100 KB of source code).

The advantage of Basic was that the compiler optimized the code for your specific hardware, so you didn’t have to optimize it yourself.

If you look at Assembly, however, you ran less quickly into the bottleneck of the amount of available memory, and more into the challenge of making your code efficient and optimal. Assembly also has to be converted (by an assembler), but it is essentially already written in the instructions that the CPU understands.

That is also the problem with Assembly: it is up to you to make sure your instructions are supported on the hardware the code will run on. You have to do the optimizing yourself, which costs a lot more time.

In other words: Basic = faster development (less thought about program structure and instruction set, because the compiler takes care of that), but only smaller programs are possible, due to the overhead the compiler brings, and you are less challenged to optimize your code.

Assembly = highly optimized code, so larger (more extensive) programs are possible in less code, but you spend much more time thinking about how to program it. Much faster program execution.

I wonder how others look at this.
