
Three degrees of freedom

 
November 15th, 2018 by Colin Walls

Developing embedded software used to be easy. Actually, that is not true. It has never been easy, but certain matters were simpler. Embedded developers have always needed more control over code generation because, as I am often heard to chant, every embedded system is different, and the priorities and requirements change from one to another.

It used to be broadly a choice between speed and size of code, but it is no longer that simple …

Although embedded compilers typically offer finer-grained control, optimizing code is usually a matter of selecting a bias towards speed or size. It just works out that, most of the time, the fastest way to implement an algorithm is also the biggest, and the smallest is the slowest. There are exceptions, but they are not common. Much the same principle applies to data, which can be packed (smallest) or unpacked (fastest). This is commonly specified by a compiler optimization switch and may be overridden for specific objects using the (extension) keywords packed and unpacked.
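
As a minimal sketch of the data trade-off (assuming a GCC-style toolchain, where the attribute syntax below is a compiler extension standing in for a packed keyword), packing saves space at the cost of slower, possibly unaligned, access:

#include <stdio.h>
#include <stdint.h>

/* Unpacked: fields are aligned for fast access, at the cost of padding. */
struct sample_unpacked {
    uint8_t  flag;
    uint32_t value;   /* compiler typically inserts 3 bytes of padding before this */
};

/* Packed: no padding, so smaller, but field access may be slower. */
struct __attribute__((packed)) sample_packed {
    uint8_t  flag;
    uint32_t value;
};

int main(void)
{
    printf("unpacked: %zu bytes\n", sizeof(struct sample_unpacked));  /* typically 8 */
    printf("packed:   %zu bytes\n", sizeof(struct sample_packed));    /* 5 */
    return 0;
}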

When designing hardware, developers have a very similar choice. They code using a hardware description language (like VHDL or Verilog) and use a synthesis tool (which is analogous to a compiler) to implement the design. Just as with software, there are trade-offs between speed and size. However, for some time, another factor has been taken into consideration: power consumption.

Power is no longer purely a hardware issue, as software increasingly influences the power efficiency of a device. Conventional code optimization has an effect here. Small code means the device needs less memory, which reduces power consumption. Alternatively, fast code causes the CPU to burn less power getting the job done, because it can then spend more time in a low-power sleep state. Getting the balance just right is not easy. I do wonder if “optimize for power” will soon be a compiler feature …
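
As an illustration, here is a sketch of that "race to idle" idea, assuming an ARM Cortex-M target where the CMSIS __WFI() (wait for interrupt) intrinsic is available; the two application hooks are hypothetical names, not a real API:

#include <stdbool.h>
/* The device-specific CMSIS header (name varies by vendor) provides __WFI(). */

extern bool work_pending(void);      /* hypothetical: is there anything to do? */
extern void do_work_quickly(void);   /* hypothetical: speed-optimized work item */

void main_loop(void)
{
    for (;;) {
        /* Finish all pending work as fast as possible ... */
        while (work_pending())
            do_work_quickly();
        /* ... then sleep until the next interrupt; the faster the work
           completes, the longer the CPU stays in the low-power state. */
        __WFI();
    }
}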

So, it may be concluded that instead of a two-way tension between speed and size, it now goes three ways: speed, size and power.

It occurred to me that there is an interesting analogy to this situation in another technology area that interests me: digital photography. Traditionally, having selected a specific speed (i.e. sensitivity) of film, indicated by its ISO number (100, 200, 400, etc.), the photographer needed to balance shutter speed against aperture for each picture. This could be a problem in an unexpected situation where an object is moving quickly (requiring a fast shutter speed to capture it) but a good depth of field (implying a small aperture) is also needed. Nowadays, the photographer can choose the ISO for each image – i.e. they have three degrees of freedom. Automatic cameras often let you set either the shutter speed or the aperture and let the other float. I, personally, have yet to see one where both can be set and the ISO adjusted automatically to get a good exposure.
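
To make the exposure arithmetic concrete, here is a small illustrative C sketch (the function names are mine, not from any camera API): with both shutter speed and aperture fixed, it solves for the ISO that keeps the exposure correct, using the standard exposure-value relationship EV = log2(N^2 / t):

#include <math.h>
#include <stdio.h>

/* Exposure value for f-number N and shutter time t in seconds. */
static double exposure_value(double f_number, double shutter_s)
{
    return log2((f_number * f_number) / shutter_s);
}

/* Given the scene's metered EV at ISO 100, find the ISO that yields a
   correct exposure with BOTH aperture and shutter fixed - the third
   degree of freedom. */
static double iso_for_exposure(double f_number, double shutter_s, double scene_ev100)
{
    return 100.0 * pow(2.0, exposure_value(f_number, shutter_s) - scene_ev100);
}

int main(void)
{
    /* Fast shutter (1/1000 s) and small aperture (f/11) in bright sun (EV 15):
       roughly ISO 370, so ISO 400 in practice. */
    printf("ISO needed: %.0f\n", iso_for_exposure(11.0, 1.0 / 1000.0, 15.0));
    return 0;
}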
