Well, everything revolves around optimization. I had read somewhere about a company that ran all its processes (or production) through a self-learning system to achieve even better optimization.
One can view it like this: if first A is done and then B, then we can add C. But if C is prepared in advance, the waiting time before the result can be shortened.
In reality, there are hundreds or thousands of parameters at play here, so one can train a model that takes care of finding the best optimization.
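The A/B/C idea above can be sketched with a toy calculation. The step durations are made-up numbers, purely for illustration: if preparing C does not depend on A or B, it can run in parallel instead of waiting its turn.

```python
# Toy illustration of the A -> B -> C scheduling idea.
# Durations are hypothetical, chosen only to show the effect.
durations = {"A": 4.0, "B": 3.0, "C": 2.0}

# Naive schedule: C starts only after A and B have finished.
sequential = durations["A"] + durations["B"] + durations["C"]

# If C is prepared in advance (in parallel), the total time is
# whichever track takes longer: A+B, or C on its own.
overlapped = max(durations["A"] + durations["B"], durations["C"])

print(f"sequential: {sequential}, with C prepared in advance: {overlapped}")
```

With thousands of such parameters instead of three, finding the best schedule by hand becomes hopeless, which is exactly where a trained model comes in.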
A result may be that, for example, a generator has to run for less time. Certain processes require less energy because they are used more efficiently, and so on.
So it can indeed save a business money and reduce its energy consumption. And as a side effect there may be fewer emissions.
This is of course put very simply, but hopefully I have explained the idea behind it well. If anyone has a better vision on this, I would like to hear it!
The field “artificial intelligence” has been given a serious boost.
Via graphics card chips (GPUs), academics and researchers were able to accelerate their algorithms, both to train a model and to run a trained one.
3D rendering and rasterization (and now also ray tracing, which traces light rays to produce realistic shadows and reflections, making an image look lifelike) need massive parallel computing power: 1920×1080 pixels, each 24 bits wide.
That math is what graphics cards and ML algorithms have in common.
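A quick back-of-the-envelope calculation shows why those pixel counts demand parallel hardware. The 60 fps refresh rate is an assumption for illustration; the resolution and bit depth are the figures from the text.

```python
# Why rendering needs massive parallelism: pixel throughput per second.
width, height, bits_per_pixel = 1920, 1080, 24  # figures from the text
fps = 60                                        # assumed refresh rate

pixels_per_frame = width * height
bits_per_frame = pixels_per_frame * bits_per_pixel
bytes_per_second = bits_per_frame // 8 * fps

print(f"{pixels_per_frame} pixels per frame,")
print(f"{bytes_per_second / 1e6:.0f} MB of pixel data per second at {fps} fps")
```

Over two million pixels have to be computed for every single frame, and each can be processed independently, which is exactly the kind of workload a GPU's many small cores are built for.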
We need data, an algorithm and a desired outcome.
If you’ve already used a Movidius neural compute stick on a Raspberry Pi, you can detect objects live.
Here is an example with a bottle of water, in real time with a USB camera.
The algorithm, trained in advance, will place a rectangle around the object, along with a percentage. That percentage is the statistical value, according to the model, that the object in the rectangle is a bottle.
The model was trained with data: pictures of bottles and vials.
That’s the input data.
The model is a CNN (convolutional neural network), a type that focuses on classifying objects.
That’s the algorithm (the math).
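To make "the math" concrete, here is a minimal sketch of the convolution operation at the heart of a CNN: sliding a small kernel over an image and computing dot products. The 4×4 "image" and the edge-detecting kernel are toy values for illustration; a real CNN stacks many such layers with learned kernels.

```python
# Minimal 2D convolution ("valid" mode, no padding or stride).
def conv2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = [[0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            # Dot product of the kernel with the window at (i, j).
            out[i][j] = sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh) for dj in range(kw)
            )
    return out

# Toy 4x4 grayscale "image" with a sharp vertical edge in the middle.
image = [
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
]
# Sobel-like kernel that responds strongly to vertical edges.
kernel = [
    [-1, 0, 1],
    [-2, 0, 2],
    [-1, 0, 1],
]
print(conv2d(image, kernel))
```

Every output value lights up because the edge runs through every window; on a flat region of the image the same kernel would produce zeros. Training a CNN means learning kernels like this automatically instead of designing them by hand.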
The goal was to recognise bottles, so an image from the camera will be analysed to see whether an object is a bottle or not.
For example, if the output is 97%, the model is 97% sure it is a bottle. I have deliberately said model here: it does not actually have to be a bottle, even though there is a 97% chance according to the trained algorithm.
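The post-processing of such a detector can be sketched as follows. The detection tuples and the threshold value are hypothetical, not the Movidius API; this only shows the logic of keeping confident detections and formatting the percentage that gets drawn next to the rectangle.

```python
# Hypothetical detector output: (label, confidence, bounding box).
# Keep only detections above a confidence threshold and format the
# label the way the demo draws it next to the rectangle.
def filter_detections(detections, threshold=0.5):
    kept = []
    for label, confidence, box in detections:
        if confidence >= threshold:
            kept.append((f"{label}: {confidence:.0%}", box))
    return kept

# Made-up raw output for one camera frame (boxes as x, y, w, h).
raw = [
    ("bottle", 0.97, (120, 40, 60, 180)),
    ("bottle", 0.22, (300, 90, 50, 150)),  # too uncertain, dropped
]
print(filter_detections(raw))
```

The threshold is a design choice: set it low and you get more false positives, set it high and the model misses real bottles. The 97% remains the model's confidence, not a guarantee.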
The choice of input data, the algorithm and the purpose all hang together and are defined by the designer.
What you put into it, comes out too.
If you stop there, there is also a pitfall (see also the ridiculed OpenAI claims).
An image classification algorithm will be unusable for speech recognition.
For something like the climate, you need data with all the relevant parameters, and the quantity should be significant. And you need to know exactly what you want to find out, recognise patterns, and build a model for it if one doesn't exist yet.
In Belgium, the car is a major cash cow for the ailing budget. That is an example where other factors prevail over CO2 emissions, since re-election and income for the person and party are more important.