EV Charger Development: Choosing the Right Hardware Platform and Programming Language

Introduction
We want to build products that last. But lasting doesn’t just mean physical durability. To us, a product is something that offers convenience through well-designed functionality and layout, while being maintainable with minimal technical debt. It operates efficiently without wasting energy, providing both sustainability and trust.
In this post, we share why we chose a Microcontroller Unit (MCU) over an Application Processor (AP) when designing our EV charger, and why we decided to use the C programming language for development.
This post focuses more on subjective, experience-based explanations than on presenting quantitative data. Rather than going into technical specifics, we aim to share the reasoning behind our decisions and the thought process that led to them.
We welcome a wide range of perspectives, including critical viewpoints. You can find the related open-source firmware on GitHub: https://github.com/pazzk-labs/evse
EV Charger: What We’re Building and Why It Matters
As the EV market grows rapidly, the importance of charging infrastructure is increasing as well. An EV charger is a critical device that connects the vehicle to the power grid. It must meet a wide range of requirements — including reliability, safety, and maintainability.
Our first product is an 11 kW AC charger that supports ISO 15118 Plug & Charge and maintains full compatibility with IEC 61851.
A charger is not just a power delivery device — it is also a communication and control system that must comply with international standards and meet strict safety and security requirements. In addition to ISO 15118 and IEC 61851, it must support protocols such as OCPP and power line communication (PLC). Compatibility with next-generation energy infrastructure technologies — such as Vehicle-to-Grid (V2G) and smart grid integration — is also a key consideration.
Ultimately, an EV charger is a control-centric system that bridges the vehicle and the power grid — accommodating multiple standards and protocols. It must be carefully engineered to balance electrical stability, cybersecurity, and maintainability.
Hardware Platform: Why We Chose an MCU Over an AP
Note: We use the term hardware platform not only to refer to the processor architecture itself, but to encompass the surrounding circuitry and development tools — the entire hardware ecosystem. This is because the ease of developing application-level products depends less on the processor architecture itself, and more on the availability of development tools and the broader ecosystem from which relevant information can be obtained.
Primary Considerations
The processor architecture was never in question — ARM has already become the de facto standard in both the mobile and embedded markets. Beyond power efficiency, reliability, and maturity, ARM continues to lead over other architectures in terms of development tools, libraries, and overall ecosystem support.
The only real question in selecting a hardware platform was whether to use an Application Processor (AP) or a Microcontroller Unit (MCU). This decision was effectively a choice between running a general-purpose operating system like Linux, or adopting a real-time operating system (RTOS) such as Zephyr or FreeRTOS.
An Application Processor (AP) is a high-performance processor capable of handling complex computations and large volumes of data. In contrast, a Microcontroller Unit (MCU) is a low-power, cost-effective processor optimized for performing specific, well-defined tasks.
APs, such as those used in Raspberry Pi or NXP i.MX platforms, can run Linux-based systems and offer ample memory and high processing power, thanks to their general-purpose design. However, they also come with drawbacks: relatively high cost and power consumption, long boot times, and limited real-time performance.
| Category | AP (e.g. Raspberry Pi, i.MX 8M) | MCU (e.g. ESP32-S3, STM32H7) |
|---|---|---|
| CPU Architecture | Cortex-A53, Cortex-A7 | Xtensa LX7, Cortex-M7 |
| Clock Speed | 1.2–2.0 GHz | 160–480 MHz |
| RAM | 512 MB–8 GB | 512 KB–2 MB (on-chip SRAM) |
| Boot Time | 4–15 s | 100–500 ms |
| Power Usage | 3–7 W | 0.3–1 W |
| OS Support | Linux-based | Bare-metal / RTOS (FreeRTOS, Zephyr) |
| Module Cost | $15–30 | $2–5 |
Detailed Considerations
The first step was to define the minimum memory requirements. Considering the memory demands of various software stacks (including networking layers and application-level features) as well as future scalability, we aimed to secure at least 512 KiB of RAM, with 1 MiB as the recommended target.
We set 8 MiB of flash memory as the minimum requirement, but determined that 16 MiB or more would be more appropriate given the expected data storage needs and wear-leveling considerations.
Regardless of whether we chose an AP or an MCU, the use of external flash memory was unavoidable. Therefore, minimizing security risks while maintaining performance was a key consideration.
We also considered using a Hardware Security Module (HSM) for establishing a Root of Trust (RoT), as well as a Trusted Execution Environment (TEE). At a minimum, we aimed to ensure the necessary level of security for EV chargers through tamper detection and protection features built into the SoC.
Note: Most of the security vulnerabilities reported in the media about IoT devices stem from the lack of basic encryption layers such as TLS. These risks can be significantly mitigated by implementing mutual TLS (mTLS). The features mentioned above are intended to defend against physical attacks — such as side-channel analysis — that aim to extract secret keys.
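To make the mTLS point concrete, here is a minimal sketch of client-side mutual TLS configuration using mbedTLS (the stack bundled with ESP-IDF). The function name is ours, the certificate and key objects are assumed to be parsed elsewhere, and error handling is reduced for brevity; this illustrates the idea, not our production code.

```c
#include "mbedtls/ssl.h"
#include "mbedtls/x509_crt.h"
#include "mbedtls/pk.h"

/* Configure mutual TLS: verify the server's certificate against our
 * trusted CA, and present our own certificate so the server can verify
 * us in return. The cert/key objects are assumed to be parsed elsewhere. */
int mtls_setup(mbedtls_ssl_config *conf,
	       mbedtls_x509_crt *ca,      /* trusted CA chain */
	       mbedtls_x509_crt *own,     /* our client certificate */
	       mbedtls_pk_context *pkey)  /* our private key */
{
	mbedtls_ssl_config_init(conf);

	int rc = mbedtls_ssl_config_defaults(conf, MBEDTLS_SSL_IS_CLIENT,
			MBEDTLS_SSL_TRANSPORT_STREAM,
			MBEDTLS_SSL_PRESET_DEFAULT);
	if (rc != 0) {
		return rc;
	}

	/* Reject any server we cannot verify. */
	mbedtls_ssl_conf_authmode(conf, MBEDTLS_SSL_VERIFY_REQUIRED);
	mbedtls_ssl_conf_ca_chain(conf, ca, NULL);

	/* Presenting a client certificate is what makes the TLS mutual. */
	return mbedtls_ssl_conf_own_cert(conf, own, pkey);
}
```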
After reviewing the relevant standards, we determined that a 100 ms cycle was the minimum real-time requirement at the application level for Control Pilot (CP) measurement and relay control. Most peripheral devices could be handled via DMA or interrupt-based processing, minimizing CPU load. Given that the minimum clock frequency of candidate processors exceeded 160 MHz, we estimated that CPU utilization would remain below 30%. As a result, processor clock speed was not a critical factor in our design considerations.
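As an illustration, a fixed-period control task of this kind might look like the following FreeRTOS sketch. The cp_measure() and relay_update() names are hypothetical stand-ins for the actual sampling and relay logic:

```c
#include "FreeRTOS.h"
#include "task.h"

#define CONTROL_PERIOD_MS 100

/* Hypothetical stand-ins for the actual CP sampling and relay logic. */
extern void cp_measure(void);
extern void relay_update(void);

static void control_task(void *arg)
{
	(void)arg;
	TickType_t last_wake = xTaskGetTickCount();

	for (;;) {
		/* Unlike vTaskDelay(), vTaskDelayUntil() keeps a fixed
		 * period regardless of how long the loop body takes. */
		vTaskDelayUntil(&last_wake, pdMS_TO_TICKS(CONTROL_PERIOD_MS));

		cp_measure();   /* sample the Control Pilot signal */
		relay_update(); /* drive the relay from the latest state */
	}
}
```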
Fast boot time was an important metric from the perspective of MTTR (Mean Time To Recovery) in the event of system failures. Of course, the ideal scenario is that unexpected reboots never occur — but in reality, such perfection is nearly impossible to achieve. That’s why system design must assume the worst. Even if a fault or bug causes the system to deviate from its expected behavior, it is critical that the failure be detected and recovered quickly.
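One common way to turn this "assume the worst" stance into practice is a supervised watchdog: each task periodically reports that it is alive, and the hardware watchdog is kicked only when every task has checked in. The sketch below is a generic pattern rather than our actual implementation, with hw_watchdog_kick() as a hypothetical wrapper around the SoC-specific watchdog registers:

```c
#include <stdint.h>

#define TASK_NET   (1u << 0)
#define TASK_CTRL  (1u << 1)
#define ALL_TASKS  (TASK_NET | TASK_CTRL)

/* Hypothetical wrapper around the SoC-specific watchdog registers. */
extern void hw_watchdog_kick(void);

static volatile uint32_t alive_bits;

/* Each worker task calls this from its main loop. (Assumes each bit has
 * a single writer, or that the OR is wrapped in a critical section.) */
void task_report_alive(uint32_t task_bit)
{
	alive_bits |= task_bit;
}

/* Runs periodically, well inside the hardware watchdog timeout. If any
 * task has stopped reporting, the kick is skipped and the watchdog
 * resets the system, bounding recovery time to the watchdog period. */
void supervisor_poll(void)
{
	if ((alive_bits & ALL_TASKS) == ALL_TASKS) {
		alive_bits = 0;
		hw_watchdog_kick();
	}
}
```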
Although the product was initially designed to run on constant power, we also considered future battery-powered variants. As a result, low power consumption — while not a top priority — was still one of the design requirements. This was also one of the main reasons we chose to implement the software using an event-driven architecture.
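The sketch below shows the basic shape of such an event-driven loop on FreeRTOS: the main loop blocks on a queue, so the task is not runnable while nothing is pending and the idle task can put the core into a low-power state. The event types and the dispatch() handler are illustrative, not taken from the actual firmware:

```c
#include "FreeRTOS.h"
#include "queue.h"

enum event_type { EVT_TIMER, EVT_CP_EDGE, EVT_NET_RX };

struct event {
	enum event_type type;
	/* payload omitted for brevity */
};

/* Created at init, e.g. xQueueCreate(16, sizeof(struct event)). */
static QueueHandle_t event_queue;

/* Hypothetical per-event handler. */
extern void dispatch(const struct event *evt);

static void event_loop(void *arg)
{
	(void)arg;
	struct event evt;

	for (;;) {
		/* Block with no timeout: no polling, no busy-waiting.
		 * The CPU sleeps until an event actually arrives. */
		if (xQueueReceive(event_queue, &evt, portMAX_DELAY) == pdTRUE) {
			dispatch(&evt);
		}
	}
}
```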
Note: As we leaned toward using an MCU, we initially began evaluating SoCs based on Cortex-M33, M4, and M7 cores. However, we ultimately chose a non-ARM architecture — specifically, Xtensa — after further reviewing the ESP32 module. Given its functionality and cost, no other MCU candidate seemed to offer a better fit for this project.
The ESP32 is a low-power, low-cost MCU that supports both Wi-Fi and BLE, along with a wide range of peripherals and communication protocols. We were able to use a module with built-in PSRAM, and its support for both Ethernet and wireless (Wi-Fi + BLE) communication was a strong advantage. The ecosystem is extremely active, and we found no known critical CVEs. Since the selection criteria and evaluation details extend beyond the AP vs. MCU topic, we plan to cover those in a separate post.
Programming Language: Why C?
Given the selected MCU hardware platform, the viable programming language options included C, C++, and Rust.
The only factor we considered when choosing a programming language was developer productivity. Just as object-oriented programming is a paradigm rather than a syntax, we believe that product quality and security are shaped more by architecture and implementation than by the language itself. The language is merely a means to support the underlying design.
Of course, modern programming languages offer powerful features that not only boost productivity but also help prevent developer mistakes. However, given C’s dominant presence in the industry — along with the overhead of adopting Rust or C++ — the choice of programming language was relatively straightforward. Our existing familiarity and proficiency with each language also made the decision easier.
C++ offers object-oriented capabilities and rich abstractions, but its size and complexity were significant drawbacks. Rust provides strong guarantees around memory safety, but it remains relatively immature in the MCU ecosystem and comes with a steep learning curve. Naturally, the choice leaned toward C — the simplest option, and one the team had already worked with extensively.
However, it was necessary to consider how to mitigate the inherent weaknesses of the C language — such as manual memory management, undefined behavior, and limited abstraction — and how to reduce bug density as a result.
To address the well-known memory management vulnerabilities frequently associated with C, we aimed to minimize dynamic allocation and clearly manage memory lifecycles. In subsystems where dynamic allocation was unavoidable — such as the network stack — we maintained strict control over deallocation timing. At the application level, we actively used static allocation for buffers and structures that persist from initialization through shutdown.
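A small example of this policy: a buffer that lives from initialization through shutdown is simply declared statically, so its lifetime is explicit, there is nothing to free, and the worst-case memory footprint is known at link time. The names, sizes, and the driver_receive() hook here are illustrative:

```c
#include <stddef.h>
#include <stdint.h>

#define RX_BUFSIZE 1536 /* illustrative: roughly one Ethernet frame */

/* Lives from boot to shutdown: no malloc(), no free(), no leak. */
static uint8_t rx_buf[RX_BUFSIZE];

/* Hypothetical driver hook that fills the buffer and returns the
 * number of bytes received. */
extern size_t driver_receive(uint8_t *buf, size_t size);

size_t net_poll(const uint8_t **out)
{
	size_t len = driver_receive(rx_buf, sizeof(rx_buf));
	*out = rx_buf;
	return len;
}
```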
While C does not offer built-in abstraction mechanisms such as classes or interfaces, we found that its simplicity actually enables intuitive abstraction and even object-oriented design. For example, using opaque pointers and forward declarations, we were able to implement a surprisingly effective form of encapsulation. This approach helps reduce code complexity and improve maintainability.
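Here is a condensed sketch of that pattern (the names are illustrative, not the actual module names in our firmware). The header exposes only a forward-declared handle, so callers cannot reach into the structure; the full definition exists only in the implementation file:

```c
/* ---- charger.h: the public interface ---- */
struct charger; /* opaque: the layout is not visible to callers */

struct charger *charger_create(void);
int charger_start(struct charger *self);

/* ---- charger.c: the hidden implementation ---- */
struct charger { /* full definition lives here only */
	int state;
	int fault_count;
};

struct charger *charger_create(void)
{
	/* Statically allocated, in line with the allocation policy above. */
	static struct charger instance;
	return &instance;
}

int charger_start(struct charger *self)
{
	self->state = 1; /* e.g. transition to a "charging" state */
	return 0;
}
```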
Undefined Behavior (UB) is something we actively manage through rigorous code reviews, strict compiler warning flags (-Wall, -Wextra, -Wconversion, -Wsign-conversion, -Wshadow), and the use of static analysis tools. Additionally, by adhering to established coding guidelines such as MISRA C, we aim to fundamentally reduce the likelihood of UB in the codebase.
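As one concrete example of the kind of UB these measures target, consider computing a time delta from a wrapping tick counter. Done with signed arithmetic, the subtraction can overflow, which is undefined; unsigned arithmetic is well-defined modulo 2^32:

```c
#include <stdbool.h>
#include <stdint.h>

/* UNDEFINED when the counter wraps:
 *   int32_t delta = (int32_t)now - (int32_t)then;  // signed overflow
 *
 * Well-defined: uint32_t subtraction wraps modulo 2^32, so the delta
 * is correct across counter wraparound as long as the true elapsed
 * time fits in 32 bits. */
static bool timeout_expired(uint32_t now, uint32_t then, uint32_t period)
{
	return (now - then) >= period;
}
```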
Alongside these practices, we integrated tools like cppcheck, Coverity, and CodeQL into our development workflow to overcome language-level limitations and improve the overall reliability of our codebase.
Closing Thoughts
In this article, we outlined the rationale behind choosing an MCU and the C programming language for our EV charger project, along with the technical considerations that informed those decisions.
This choice was not about declaring one technology or language superior to others, but rather about finding the most optimized solution given our specific constraints and timeline. Every technology has its strengths and trade-offs depending on the requirements and environment — and accordingly, the right choice can vary. We will continue to observe the evolution of both technology and industry practices, and refine our approach as needed.
Have you faced similar challenges or made design choices involving MCUs or the C programming language? We’d love to hear your thoughts — whether you’ve taken a similar path or approached things differently. Feel free to share your experiences, insights, or related blog posts via comments, issues, or any other channel.
We’re looking for people to build with.
Pazzk is actively seeking partners who share our direction and want to build the future of EV charging together. We welcome charger manufacturers, charge point operators, as well as investors, strategic partners, and technical collaborators interested in exploring new opportunities in the EV charging ecosystem. If you're interested, feel free to reach out.
Contact Us