Core wars: What's so important about the core anyway?
Lucio di Jasio, Microchip Technology
In recent years, the availability of ever-smaller-geometry CMOS processes has meant that manufacturers have been promoting 32-bit microcontrollers into segments that were previously the domain of 8-bit architectures.
Encouraged by the strong campaigns mounted by ARM, some microcontroller manufacturers appear to have the 8-bit market clearly in their sights and have dedicated their entire portfolios to 32-bit offerings. This article looks at the rationale behind these choices, and at the true impact they have on the choice available to embedded-control designers, by asking the question: "How important is the core anyway?"
Gate counts and discounts
Embedded control has always been a few steps behind the mainstream computing industry, for a number of reasons. The first is the need to control cost; another, perhaps, is that embedded design engineers are inherently more cautious and need to ensure that microcontrollers work in mundane, real-world applications.
Here, reliability and robustness are valued more than in other fields, and they can only be achieved with good design practices and a deep knowledge of the system. So, whilst the rest of the semiconductor world is moving toward new 22nm geometries and beyond, most microcontrollers, such as those in fridges, washing machines or TV remote controls, are only now breaking the 200nm barrier.
This level of evolution places embedded control at the point where the desktop industry was more than a decade ago: when logic gates started to be relatively cheap compared to the cost of the packages and the analog functions on a given die. It no longer seems ludicrous, for example, to consider using a 32-bit core to solve a 'blink-an-LED' problem.
Ease of design
It is, of course, all about ease of design. Since an increasing number of applications are using microcontrollers to replace electro-mechanical systems with mechatronics, more engineers are being asked to master the art of designing firmware for embedded control. Microcontroller firmware is at the heart of the intelligence expected from embedded products. As more products need firmware, they also need it to be developed faster, as "time to market" is the key parameter on which virtually every project seems to be measured nowadays.
Programming languages: How high is high?
The first and most obvious benefit of using a smaller CMOS process is that a lot more Flash program memory can be made available at a given target price. This can translate into faster development, as space constraints are relaxed and the designer can focus more easily on the key project objectives. Rich libraries can be used to standardise application development, and rapid-prototyping tools can help to generate code automatically from templates. Fundamental to this is the gradual transition from Assembly language to C over the past ten years.
C language is an important step in the move to core independence: if used properly, C language can make the application code more readable, easier to maintain and easier to migrate. It has the power to elevate designs away from the idiosyncratic mnemonics and minute detail of proprietary microcontroller architectures, and reduce all embedded-control architectures to the same abstract, stack-based model.
Crucially, the transition to C has taken the spotlight away from core design. Of course the core still matters, because it has to be fast and efficient, but it has ceased to be a barrier to migration. C language has enabled embedded design to move into an era of greater core independence. The fact that a design is based on an 8-, 16- or 32-bit architecture can become an irrelevant detail when a C compiler is invoked at the press of a button.
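To illustrate that independence, consider a minimal checksum routine written with the fixed-width types from the standard header stdint.h (the function and its use here are invented for illustration). The same source compiles unchanged for an 8-, 16- or 32-bit target and, crucially, computes the same result on all of them:

```c
#include <stdint.h>
#include <stddef.h>

/* Portable 8-bit additive checksum: the fixed-width uint8_t type
   guarantees identical wrap-around arithmetic whether the core's
   native word is 8, 16 or 32 bits wide. */
uint8_t checksum8(const uint8_t *data, size_t len)
{
    uint8_t sum = 0;
    for (size_t i = 0; i < len; i++) {
        sum = (uint8_t)(sum + data[i]);  /* wraps modulo 256 on every core */
    }
    return sum;
}
```

Written against native `int` instead, the same routine could silently behave differently across architectures; the fixed-width types are what make the core an "irrelevant detail".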
Splitting hardware and software design
Another positive consequence of the broad adoption of C language programming has been that embedded firmware design has moved closer to the expertise of mainstream computing. Increasingly, companies are finding it more convenient to split hardware and software development.
This is partly due to the relatively high numbers of computer-science graduates, and also because it allows hardware and software development to progress in parallel to achieve a faster time to market. Hardware designers still need to be highly skilled and software engineers need to have acquired some expertise in embedded design or the resulting systems risk being elegant but inefficient.
Efficient systems will use multiple layers of software libraries to provide complex functions, such as connectivity (USB, TCP/IP, etc.) and file-system support, independently of the core. A real-time operating system (RTOS) can be used to keep the design from becoming an impossible tangle of event-driven, interrupt-based spaghetti.
The most popular RTOSes are available for a broad selection of architectures, once more removing details of the core from the designer's perspective and providing a clean migration path. Eventually, all of these software layers meet the hardware, where it is the vendor's platform width and depth that matters. The platform must offer the widest possible set of options to provide the design team with a complete matrix of peripherals, memory and compatible packages.
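The structuring benefit need not come from a full commercial RTOS; even a minimal cooperative task table captures the idea of replacing ad-hoc interrupt spaghetti with a scheduled main loop. A sketch in plain C (all names are invented, and the tasks merely count invocations so the behaviour is observable):

```c
#include <stddef.h>

typedef void (*task_fn)(void);

/* Each "task" is an ordinary function that does a slice of work and
   returns quickly; here they just count how often they have run. */
static unsigned sensor_runs, display_runs, comms_runs;

static void poll_sensors(void)   { sensor_runs++;  }
static void update_display(void) { display_runs++; }
static void service_comms(void)  { comms_runs++;   }

static task_fn task_table[] = { poll_sensors, update_display, service_comms };

/* One pass of a round-robin cooperative dispatcher: the main loop calls
   this forever, giving every task a turn in a predictable order. */
void scheduler_run_once(void)
{
    for (size_t i = 0; i < sizeof task_table / sizeof task_table[0]; i++)
        task_table[i]();
}
```

A preemptive RTOS adds priorities, blocking and context switching on top of this idea, but the core-independent principle is the same: application logic lives in tasks, not scattered across interrupt handlers.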
Taking the load off the core
Most microcontroller manufacturers' marketing efforts appear to be focused on raw performance expressed in MIPS (millions of instructions per second), despite the fact that the rated MIPS of a microcontroller can mean relatively little in terms of real application performance. For example, software-emulation techniques, often referred to as "bit banging", are frequently used in embedded control, but at an extremely high performance cost. Conversely, the right peripheral set can significantly reduce the performance (MIPS) needs of a project.
Ideally, when the right logic is in place and the right peripheral modules and their connections can be configured to perfectly match the application, the processor can be put in standby or other low-power mode, effectively leaving the application’s MIPS meter pegged to zero. Continuous innovation, in addition to perfecting and refining the peripheral set, can make a bigger difference than the choice of core.
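To make the bit-banging cost concrete, here is a software shift-out of one byte, simulated against an in-memory pin trace (on real hardware the pin write would be a port-register access; all names and the trace mechanism are invented for illustration). Every bit costs the CPU a shift, a mask, a pin write and a loop branch, work that a hardware shift register such as an SPI module performs while the core sleeps:

```c
#include <stdint.h>

/* Simulated GPIO trace: each entry records one level driven onto the
   data pin, so the bit-banged waveform can be inspected afterwards. */
static uint8_t pin_trace[8];
static int trace_len;

static void write_data_pin(uint8_t level) { pin_trace[trace_len++] = level; }

/* Bit-bang one byte MSB-first: eight iterations of pure CPU work that
   a hardware serial peripheral would do with zero instructions. */
void bitbang_byte(uint8_t value)
{
    for (int bit = 7; bit >= 0; bit--)
        write_data_pin((uint8_t)((value >> bit) & 1u));
}
```

Multiply those per-bit instructions by the data rate and the "MIPS meter" reading of a bit-banged interface becomes clear, which is exactly why the peripheral set, not the headline MIPS figure, often decides real application performance.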
Modern 8-bit architectures, for example, offer configurable logic blocks and fine granularity in the peripheral architecture, coupled with high internal connectivity. This allows them to compete easily with even the fastest 32-bit processor that has a less flexible peripheral set.
Whilst the race to smaller geometries has helped to provide higher logic-gate density, leading to larger memory sizes, larger cores and smaller dies, analog is the source of new challenges and hard compromises. These are the areas of greatest concern:
- Analog integration: While gates shrink with the geometry of a CMOS process, capacitors and resistors are not shrinking at the same rate.
Modern 8-bit microcontrollers are expected to integrate operational amplifiers, instrumentation amplifiers, A/D and D/A converters, comparators, accurate oscillators and voltage references. However, these analog peripherals are harder to integrate in smaller geometries with the same level of accuracy and performance. With the exception of a basic A/D and D/A pair, very few vendors have ventured down this path.
- Data EEPROM and memory endurance: With a smaller geometry comes more tunnelling. This means lower erase/write cycle counts and shorter memory retention, sometimes measured in months rather than the decades typically offered by 8-bit devices built on larger geometries. EEPROM emulation in Flash is offered as a palliative, at the cost of large areas of RAM, additional complexity and CPU overhead.
- 5V operation and beyond: The smaller geometry cores operate at very low internal voltages. 5V tolerance can be provided on a limited subset of the I/Os by most vendors but very few, if any, have succeeded in offering true 5V operation on more than a handful of pins.
- Extreme low power: As the smaller geometry cores need to operate at very low internal voltages, such as 0.9V to 1.2V, they also need to interface with external peripherals and interfaces operating at 2V to 3.3V. This results in multiple internal rails derived from the device Vdd using on-chip LDOs at the expense of energy efficiency. Very sophisticated islanding and clock-branching techniques have been used to solve the problem but, to date, none of the new cores can get closer than an order of magnitude to the low-power performance of the best 8-bit architectures.
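The overhead of the EEPROM emulation mentioned above shows up even in the simplest scheme: because Flash erases only in whole pages, each logical update must append a new key/value record, and stale records are reclaimed only by erasing and compacting an entire page. A minimal append-only sketch, simulated with an in-memory "page" (record format, sizes and names are all invented for illustration):

```c
#include <stdint.h>

#define PAGE_RECORDS 16
#define EMPTY_KEY    0xFFu  /* erased Flash reads back as all ones */

/* One simulated Flash "page" of key/value records. */
static struct { uint8_t key, value; } page[PAGE_RECORDS];

/* Mimic a page erase: every record slot returns to the empty state. */
void ee_format(void)
{
    for (int i = 0; i < PAGE_RECORDS; i++)
        page[i].key = EMPTY_KEY;
}

/* A write never modifies in place: it appends a fresh record to the
   first empty slot, consuming Flash even when only one byte changes. */
int ee_write(uint8_t key, uint8_t value)
{
    for (int i = 0; i < PAGE_RECORDS; i++) {
        if (page[i].key == EMPTY_KEY) {
            page[i].key = key;
            page[i].value = value;
            return 0;
        }
    }
    return -1;  /* page full: a real driver must now erase and compact */
}

/* A read scans backwards so the most recent record for a key wins. */
int ee_read(uint8_t key, uint8_t *value)
{
    for (int i = PAGE_RECORDS - 1; i >= 0; i--) {
        if (page[i].key == key) {
            *value = page[i].value;
            return 0;
        }
    }
    return -1;  /* key never written */
}
```

Every scan, copy and compaction step here is RAM, code space and CPU time that true data EEPROM on a larger-geometry 8-bit part simply does not need.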
Of course, progress is continuous; there have been major advances in recent years and months, and much is expected of the next generation of microcontrollers, particularly in terms of code density and performance. Any application that needs true computational performance will benefit from using a larger architecture. The problem, however, lies in extending this reasoning to the point where it is believed that any and every application would benefit from using a single, common 32-bit core.
Cars versus controllers
In many respects, choosing a microcontroller is like buying a car: the choice must be dictated by our personal taste of course, but must be reasonably linked to the specific use we will make of the vehicle. For regular, long-distance travel, speed, comfort and performance would be the main objectives and a full size car with a large turbo diesel engine would probably be the best choice.
For urban driving over short distances and parking in small spaces, a compact car with a hybrid or electric engine, would probably be the best choice. Different engines provide very different performances and fuel consumption figures just as different cores in a microcontroller platform can provide different computational performances, power consumption and ease of use to satisfy different applications. Interestingly, despite the huge range of makes, models and special editions that car manufacturers offer today, none of them has ever claimed that a single engine would satisfy every customer.
The analogy with cars can also extend to the after-sales network, or support eco-system. For microcontrollers, this includes all of the third parties and their products. The eco-system once more tests the importance (or lack thereof) of core commonality. In a particular eco-system, for example, there may be a wide choice of competing development suites: IDEs, compilers, libraries and debugging tools.
However, each third-party provider will have worked hard to ensure that its suite is not too easily interchangeable with those of its close competitors. Each will strive to provide the most complete experience to ensure continued loyalty, and revenue, from its customers, regardless of core commonality with competing third-party providers.
Each of the third-party providers, additionally, offers hardware tools which are invariably specific to a particular microcontroller manufacturer, as they are constrained by the specific peripheral mix and pin-out options. In fact, it is interesting to note that, as of today, no two 32-bit microcontrollers from different manufacturers are available in the same package with a compatible pin-out, despite the fact that they share the exact same core.
Core commonality has become a major discussion point, but the selection of a specific microcontroller is still driven by "non-core" factors which exist irrespective of the core:
- Pin-out compatibility and the universality of the development tools
- Flexibility to draw on the widest selection of peripherals, memory sizes and packages
- The ability to select the right mix of peripherals for each application
- The manufacturer’s innovation and commitment to continually expand the product capabilities
None of these factors is directly dependent on the selection of a specific core, nor do they benefit from having a narrower choice of cores.
If anything, efficiency and innovation demand the widest possible choice in a vibrant market where new ideas and alternatives are continually being explored and made available to embedded-design engineers.