Difference Between Professional Embedded Software and Arduino/Raspberry Pi Projects

I'm curious how development of professional embedded systems works. If you Google "Embedded Software Engineer", you see a general list of required skills: C, assembly, boot loaders, driver development, RTOSes, etc.

However, searching for embedded software projects online, I see they almost all use Arduino, Raspberry Pi, etc. All that is required is a basic knowledge of C/Python and electronics; very few of the skills listed on job sites come up. Why?

If I wanted to create an embedded systems project as experience to look good on a resume, should I follow these projects but skip the high-level libraries provided by Arduino/Raspberry Pi and do everything from scratch, e.g. bare metal or the Linux kernel?

Because online projects are done by amateurs using materials available to amateurs. That means Arduino and Raspberry Pi, along with the ecosystem that grew up around them. At the end of the day, most of those "embedded" projects are a bit of circuitry plus some glue code for the libraries that already exist for the components used. An easy way I found to judge how close a hobby project is to how actual embedded code is structured is to count how often delay() is used.

Professional projects, on the other hand, have narrower constraints that preclude those options, or have access to better components because of economies of scale.

Those components will not have the community support available to amateur-grade components.

The software support for amateur-grade components also tends to be lower quality.

For understanding embedded programming I would suggest learning to program the I/O registers directly from the datasheet (setting up timers, serial communication, etc.), using and handling interrupts, and structuring your program without delay().
Raspberry Pi and Arduino might come up in early prototyping, because medical companies let researchers use whatever they are comfortable with when performance is not an issue and skilled developers are hard to find. Automotive companies that care about performance might instead ask you, during interviews, to implement a math function using a very limited hardware instruction set.

I would not hire a programmer who only knows how to call an image processing library from a scripting language. Performance aside, you cannot tune an algorithm to get good results by gluing it together with other pre-existing filters. It's not even useful as a temporary reference implementation, because its artifacts reflect the hardware the algorithm was designed for, and behavior changes with domino effects on other modules are not something you want at the last minute before an important release.

Useful skills in embedded/mobile:
* Mutation testing (for asserting that the basics work and that typos in code would actually get caught by automated tests)
* Handling people who try to get away with 0% mutation coverage while arguing that all tests can be automated (advanced algorithms with fuzzy requirements need manual testing and zero dependencies, so that mutations cannot possibly occur)
* Visualization assisted manual testing (for when automated testing cannot be applied due to fuzzy requirements and huge input space)
* Formal verification (some bugs cannot be found by just testing)
* Zero cost safety abstractions (formal verification cannot catch bugs from the operating system or static electricity)
* Criteria validation (the requirements have to make sense in the real world with physics and psychology)
* Reading assembly output (to see whether the compiler applied all expected optimizations or they have to be applied manually)
* Remote debugging and profiling (because you have to try algorithms with large amounts of fresh input data from different parts of the real world)
* Code documentation (so that it doesn't have to be rewritten after you leave the company)
* Reducing dependency on other modules (unless you want to repair your module every time something changed in the codebase)
* Structuring data as compact byte arrays for cache locality and predictability (reordering input data can double total performance)
* Efficient use of pointers
* Bitwise tricks
* ARM NEON intrinsics
* Multi-threading
* Signal processors (more power efficient than a GPU, but also more limited)
* UART, SPI, I2C (communicating with circuits)
* Power optimization (reducing average computation times, not just peak times)