
Windows CDC-ACM Driver for Mac


Class drivers exist for all three platforms: a USB CDC/NCM (communication device class, subclass network control model) class driver for Mac OS X 10.6 and later; a USB CDC/ACM (communication device class, subclass abstract control model) class driver for Windows; and a USB CDC/ECM class driver for Windows 7 and Windows 8. On Linux, CDC-ACM support is built into the kernel; if it is missing, recompile the kernel with USB Modem CDC ACM support enabled (Device Drivers > USB support > USB Modem (CDC ACM) support).

Windows has a CDC protocol driver (usbser.sys) that provides a virtual COM port. However, it requires a setup information (INF) file at the first connection.

We have an embedded device that connects to the PC via USB, and it has multiple virtual serial ports (CDC-ACM).


We have this working on Windows. On the embedded device, we have multiple CDC-ACM interfaces. The USB descriptors declare it as a composite device (class=0xEF, sub-class=2, protocol=1), and it has an 'Interface Association Descriptor' for each virtual serial port. On Windows, we use an INF file that installs usbser.sys for each CDC-ACM control interface (MI_00, MI_02, etc).
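
For reference, the standard descriptor values involved look roughly like the following C sketch (illustrative subsets of the descriptor fields only, with a placeholder vendor/product ID, not our device's actual descriptors): the device descriptor marks the device as composite via the Miscellaneous/IAD class codes, and each CDC-ACM function gets an IAD grouping its control and data interfaces.

```c
#include <stdint.h>

/* Device descriptor fields marking the device as composite (uses IADs). */
struct {
    uint8_t  bDeviceClass;     /* 0xEF: Miscellaneous */
    uint8_t  bDeviceSubClass;  /* 0x02: Common Class */
    uint8_t  bDeviceProtocol;  /* 0x01: Interface Association Descriptor */
    uint16_t idVendor;         /* placeholder VID */
    uint16_t idProduct;        /* placeholder PID */
} device_desc_fields = { 0xEF, 0x02, 0x01, 0x1234, 0x5678 };

/* One Interface Association Descriptor per virtual serial port, grouping
 * the CDC control interface with its data interface. */
struct {
    uint8_t bFirstInterface;   /* 0 for the first port, 2 for the second */
    uint8_t bInterfaceCount;   /* 2: control + data */
    uint8_t bFunctionClass;    /* 0x02: Communications (CDC) */
    uint8_t bFunctionSubClass; /* 0x02: Abstract Control Model (ACM) */
    uint8_t bFunctionProtocol; /* 0x01: AT commands, common for CDC-ACM */
} iad_port0 = { 0, 2, 0x02, 0x02, 0x01 },
  iad_port1 = { 2, 2, 0x02, 0x02, 0x01 };
```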

However, as we've found, this method doesn't seem to work for Mac. I've found that I can get it to work on Mac and Linux by changing it to a 'Communications' class device (class=2, sub-class=0, protocol=0) and removing the IADs. (For Linux, testing with Ubuntu, I found that this worked with the Ubuntu Linux kernel 2.6.35-28 or newer. With earlier kernels, only the first serial port worked.) But then, this method doesn't work for Windows.

What method can be used to make a USB device with multiple virtual serial ports that works on Windows, Mac, and Linux? I think I'd prefer a solution that uses the CDC-ACM standard as much as possible, and avoids the write-your-own-drivers option as much as possible.

Craig McQueen

3 Answers


The one way I can think of off the top of my head would be for the device to present itself as a USB hub with multiple separate single-serial-port devices attached to it. This isn't pretty, but it's very bulletproof.

SF.

As Apple's drivers don't support composite CDC devices, I'd suggest either making your device reconfigure somehow and making your alternate descriptors plain CDC, or sticking with the composite and using a third party driver (my company makes CDC ACM drivers for OS X which will probably support your device).


It may also be possible to force the issue with a codeless kext.

Hasturkun

One solution that I've found, which I think could work (subject to further testing on Windows):

Make the device enumerate in the way that works for the Mac:

  • Make it 'Communications' class (class=2, sub-class=0, protocol=0), not composite device.
  • Remove the IADs.

The device should 'just work' on Mac and recent Linux, in this configuration. (For Linux, testing with Ubuntu, I found that this worked with the Ubuntu Linux kernel 2.6.35-28 or newer. With earlier kernels, only the first serial port worked.)

Then, for Windows, modify the device's INF file to explicitly load the composite device driver usbccgp.sys. I'm a novice with Windows INF files, but here are the relevant snippets from what I could figure out so far:

..

With the INF file explicitly loading the usbccgp.sys driver, both USB serial ports worked for me on Windows XP SP3 32-bit.

I have done only limited testing so far, so I'd be interested to hear how well this works, or not, for others.

Craig McQueen


Kernel (operating system)

The kernel is a computer program that is the core of a computer's operating system, with complete control over everything in the system.[1] On most systems, it is one of the first programs loaded on start-up (after the bootloader). It handles the rest of start-up, as well as input/output requests from software, translating them into data-processing instructions for the central processing unit. Access to protected resources can be mediated through capabilities, i.e. objects that are provided to user code which allow limited access to an underlying object managed by the kernel. A common example occurs in file handling: a file is a representation of information stored on a permanent storage device. The kernel may be able to perform many different operations (e.g. read, write, delete or execute the file contents), but a user-level application may only be permitted to perform some of these operations (e.g. it may only be allowed to read the file). A common implementation of this is for the kernel to provide an object to the application (typically called a 'file handle') on which the application may then invoke operations, the validity of which the kernel checks at the time the operation is requested. Such a system may be extended to cover all objects that the kernel manages, and indeed to objects provided by other user applications.
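
As a concrete sketch of the file-handle idea above (using the POSIX API and a hypothetical /tmp/example.txt path), the program below obtains a read-only handle from the kernel and shows that each operation on the handle is checked at the time it is requested: the read succeeds, the write is refused.

```c
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    /* Ask the kernel for a handle that only permits reading. */
    int fd = open("/tmp/example.txt", O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    char buf[64];
    /* Reading is allowed: the kernel checks the handle's access mode. */
    ssize_t n = read(fd, buf, sizeof buf);
    printf("read returned %zd bytes\n", n);

    /* Writing is refused: the kernel rejects the operation on this handle. */
    if (write(fd, "x", 1) < 0)
        printf("write rejected: %s\n", strerror(errno));

    close(fd);
    return 0;
}
```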

An efficient and simple way to provide hardware support of capabilities is to delegate to the MMU the responsibility of checking access-rights for every memory access, a mechanism called capability-based addressing.[10] Most commercial computer architectures lack such MMU support for capabilities.

An alternative approach is to simulate capabilities using commonly supported hierarchical domains; in this approach, each protected object must reside in an address space that the application does not have access to; the kernel also maintains a list of capabilities in such memory. When an application needs to access an object protected by a capability, it performs a system call and the kernel then checks whether the application's capability grants it permission to perform the requested action, and if it is permitted performs the access for it (either directly, or by delegating the request to another user-level process). The performance cost of address space switching limits the practicality of this approach in systems with complex interactions between objects, but it is used in current operating systems for objects that are not accessed frequently or which are not expected to perform quickly.[11][12] Approaches where the protection mechanism is not firmware-supported but is instead simulated at higher levels (e.g. simulating capabilities by manipulating page tables on hardware that does not have direct support) are possible, but there are performance implications.[13] Lack of hardware support may not be an issue, however, for systems that choose to use language-based protection.[14]


An important kernel design decision is the choice of the abstraction levels where the security mechanisms and policies should be implemented. Kernel security mechanisms play a critical role in supporting security at higher levels.[10][15][16][17][18]

One approach is to use firmware and kernel support for fault tolerance (see above), and build the security policy for malicious behavior on top of that (adding features such as cryptography mechanisms where necessary), delegating some responsibility to the compiler. Approaches that delegate enforcement of security policy to the compiler and/or the application level are often called language-based security.

The lack of many critical security mechanisms in current mainstream operating systems impedes the implementation of adequate security policies at the application abstraction level.[15] In fact, a common misconception in computer security is that any security policy can be implemented in an application regardless of kernel support.[15]

Hardware-based or language-based protection

Typical computer systems today use hardware-enforced rules about what programs are allowed to access what data. The processor monitors the execution and stops a program that violates a rule (e.g., a user process that is about to read or write to kernel memory, and so on). In systems that lack support for capabilities, processes are isolated from each other by using separate address spaces.[19] Calls from user processes into the kernel are regulated by requiring them to use one of the above-described system call methods.

An alternative approach is to use language-based protection. In a language-based protection system, the kernel will only allow code to execute that has been produced by a trusted language compiler. The language may then be designed such that it is impossible for the programmer to instruct it to do something that will violate a security requirement.[14]

Advantages of this approach include:

  • No need for separate address spaces. Switching between address spaces is a slow operation that causes a great deal of overhead, and a lot of optimization work is currently performed in order to prevent unnecessary switches in current operating systems. Switching is completely unnecessary in a language-based protection system, as all code can safely operate in the same address space.
  • Flexibility. Any protection scheme that can be designed to be expressed via a programming language can be implemented using this method. Changes to the protection scheme (e.g. from a hierarchical system to a capability-based one) do not require new hardware.

Disadvantages include:

  • Longer application start up time. Applications must be verified when they are started to ensure they have been compiled by the correct compiler, or may need recompiling either from source code or from bytecode.
  • Inflexible type systems. On traditional systems, applications frequently perform operations that are not type safe. Such operations cannot be permitted in a language-based protection system, which means that applications may need to be rewritten and may, in some cases, lose performance.

Examples of systems with language-based protection include JX and Microsoft's Singularity.

Process cooperation

Edsger Dijkstra proved that from a logical point of view, atomic lock and unlock operations operating on binary semaphores are sufficient primitives to express any functionality of process cooperation.[20] However this approach is generally held to be lacking in terms of safety and efficiency, whereas a message passing approach is more flexible.[21] A number of other approaches (either lower- or higher-level) are available as well, with many modern kernels providing support for systems such as shared memory and remote procedure calls.
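
A minimal sketch of that claim, using POSIX threads and a binary semaphore (initial value 1) rather than Dijkstra's original notation: the semaphore's atomic wait/post operations act as the lock and unlock around a shared counter.

```c
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t mutex;            /* binary semaphore: 1 = unlocked, 0 = locked */
static long shared_counter = 0;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        sem_wait(&mutex);      /* atomic "lock": blocks while the value is 0 */
        shared_counter++;      /* critical section */
        sem_post(&mutex);      /* atomic "unlock" */
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;
    sem_init(&mutex, 0, 1);    /* initial value 1 makes it act as a lock */
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld\n", shared_counter);   /* expect 200000 */
    sem_destroy(&mutex);
    return 0;
}
```

Compiled with -pthread, both threads finish with the counter at exactly 200000; removing the sem_wait/sem_post pair makes the result unpredictable.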

I/O devices management

The idea of a kernel where I/O devices are handled uniformly with other processes, as parallel co-operating processes, was first proposed and implemented by Brinch Hansen (although similar ideas were suggested in 1967[22][23]). In Hansen's description of this, the 'common' processes are called internal processes, while the I/O devices are called external processes.[21]

Similar to physical memory, allowing applications direct access to controller ports and registers can cause the controller to malfunction, or the system to crash. Furthermore, depending on the complexity of the device, some devices can get surprisingly complex to program, and may use several different controllers. Because of this, providing a more abstract interface to manage the device is important. This interface is normally provided by a device driver or hardware abstraction layer. Frequently, applications will require access to these devices. The kernel must maintain the list of these devices by querying the system for them in some way. This can be done through the BIOS, or through one of the various system buses (such as PCI/PCIe or USB). When an application requests an operation on a device (such as displaying a character), the kernel needs to send this request to the current active video driver. The video driver, in turn, needs to carry out this request. This is an example of inter-process communication (IPC).
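
A hedged sketch of what such an abstract device interface might look like inside a kernel: a hypothetical operations table that every driver fills in (loosely inspired by structures such as Linux's file_operations, but not any real kernel's API), which lets the kernel dispatch a generic request to whichever driver manages the device.

```c
#include <stddef.h>
#include <sys/types.h>

/* Hypothetical uniform interface every driver implements. */
struct device_ops {
    int     (*open)(void *dev);
    ssize_t (*read)(void *dev, void *buf, size_t len);
    ssize_t (*write)(void *dev, const void *buf, size_t len);
    void    (*close)(void *dev);
};

/* A device as the kernel sees it: a name, private driver state, and its ops. */
struct device {
    const char              *name;
    void                    *driver_state;
    const struct device_ops *ops;
};

/* Generic kernel entry point: an application reaches this via a system call,
 * and the kernel dispatches to whichever driver manages the device. */
ssize_t kernel_device_write(struct device *dev, const void *buf, size_t len) {
    if (dev == NULL || dev->ops == NULL || dev->ops->write == NULL)
        return -1;                      /* operation not supported */
    return dev->ops->write(dev->driver_state, buf, len);
}
```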

Kernel-wide design approaches

Naturally, the above listed tasks and features can be provided in many ways that differ from each other in design and implementation.

The principle of separation of mechanism and policy is the substantial difference between the philosophy of micro and monolithic kernels.[24][25] Here a mechanism is the support that allows the implementation of many different policies, while a policy is a particular 'mode of operation'. For instance, a mechanism may provide for user log-in attempts to call an authorization server to determine whether access should be granted; a policy may be for the authorization server to request a password and check it against an encrypted password stored in a database. Because the mechanism is generic, the policy could more easily be changed (e.g. by requiring the use of a security token) than if the mechanism and policy were integrated in the same module.
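
A small illustrative sketch of that separation (all names hypothetical): the log-in mechanism below stays fixed, while the policy that decides whether a credential is acceptable is a swappable function, so moving from password checking to, say, security tokens only replaces the policy.

```c
#include <stdbool.h>
#include <string.h>

/* Policy: decides whether a credential is acceptable. Swappable. */
typedef bool (*auth_policy_fn)(const char *user, const char *credential);

/* One possible policy: check a password (a real system would consult a
 * database of stored, encrypted passwords). */
static bool password_policy(const char *user, const char *credential) {
    return strcmp(user, "alice") == 0 && strcmp(credential, "secret") == 0;
}

/* Another policy could validate a security token instead; the mechanism
 * below does not change when the policy does. */

/* Mechanism: generic log-in handling, parameterized by a policy. */
static bool attempt_login(const char *user, const char *credential,
                          auth_policy_fn policy) {
    if (!policy(user, credential))
        return false;      /* access denied */
    /* ... create a session, set up the protection domain, etc. ... */
    return true;           /* access granted */
}

int main(void) {
    return attempt_login("alice", "secret", password_policy) ? 0 : 1;
}
```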


In a minimal microkernel just some very basic policies are included,[25] and its mechanisms allow what is running on top of the kernel (the remaining part of the operating system and the other applications) to decide which policies to adopt (such as memory management, high-level process scheduling, file system management, etc.).[4][21] A monolithic kernel instead tends to include many policies, therefore restricting the rest of the system to rely on them.

Per Brinch Hansen presented arguments in favour of separation of mechanism and policy.[4][21] The failure to properly fulfill this separation is one of the major causes of the lack of substantial innovation in existing operating systems,[4] a problem common in computer architecture.[26][27][28] The monolithic design is induced by the 'kernel mode'/'user mode' architectural approach to protection (technically called hierarchical protection domains), which is common in conventional commercial systems;[29] in fact, every module needing protection is therefore preferably included in the kernel.[29] This link between monolithic design and 'privileged mode' can be traced back to the key issue of mechanism-policy separation;[4] in fact the 'privileged mode' architectural approach melds together the protection mechanism with the security policies, while the major alternative architectural approach, capability-based addressing, clearly distinguishes between the two, leading naturally to a microkernel design[4] (see Separation of protection and security).

While monolithic kernels execute all of their code in the same address space (kernel space), microkernels try to run most of their services in user space, aiming to improve maintainability and modularity of the codebase.[3] Most kernels do not fit exactly into one of these categories, but are rather found in between these two designs. These are called hybrid kernels. More exotic designs such as nanokernels and exokernels are available, but are seldom used for production systems. The Xen hypervisor, for example, is an exokernel.

Monolithic kernels


Diagram of a monolithic kernel

In a monolithic kernel, all OS services run along with the main kernel thread, thus also residing in the same memory area. This approach provides rich and powerful hardware access. Some developers, such as UNIX developer Ken Thompson, maintain that it is 'easier to implement a monolithic kernel'[30] than microkernels. The main disadvantages of monolithic kernels are the dependencies between system components – a bug in a device driver might crash the entire system – and the fact that large kernels can become very difficult to maintain.

Monolithic kernels, which have traditionally been used by Unix-like operating systems, contain all the operating system core functions and the device drivers. This is the traditional design of UNIX systems. A monolithic kernel is one single program that contains all of the code necessary to perform every kernel-related task. Every part which is needed by most programs and which cannot be put in a library is in kernel space: device drivers, scheduler, memory handling, file systems, and network stacks. Many system calls are provided to applications to allow them to access all those services. A monolithic kernel, while initially loaded with subsystems that may not be needed, can be tuned to a point where it is as fast as, or faster than, a kernel specifically designed for the hardware, while remaining more generally applicable. Modern monolithic kernels, such as those of Linux and FreeBSD, both of which fall into the category of Unix-like operating systems, feature the ability to load modules at runtime, thereby allowing easy extension of the kernel's capabilities as required, while helping to minimize the amount of code running in kernel space. In the monolithic kernel, some advantages hinge on these points:

  • Since there is less software involved it is faster.
  • As it is one single piece of software it should be smaller both in source and compiled forms.
  • Less code generally means fewer bugs which can translate to fewer security problems.

Most work in the monolithic kernel is done via system calls. These are interfaces, usually kept in a tabular structure, that access some subsystem within the kernel such as disk operations. Essentially, calls are made within programs and a checked copy of the request is passed through the system call; hence, the request does not have far to travel at all. The monolithic Linux kernel can be made extremely small not only because of its ability to dynamically load modules but also because of its ease of customization. In fact, there are some versions that are small enough to fit together with a large number of utilities and other programs on a single floppy disk and still provide a fully functional operating system (one of the most popular of which is muLinux). This ability to miniaturize its kernel has also led to a rapid growth in the use of Linux in embedded systems.
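
The 'tabular structure' mentioned above can be pictured with the following simplified sketch (hypothetical call numbers and handler names, not any real kernel's table): the trap handler validates the system-call number and then indexes an array of function pointers.

```c
#include <stddef.h>
#include <stdio.h>

/* Each entry points at the kernel routine implementing one system call. */
typedef long (*syscall_handler)(long arg0, long arg1, long arg2);

/* Hypothetical, stubbed-out handlers for a few calls. */
static long sys_read_impl(long a, long b, long c)  { (void)a; (void)b; (void)c; return 0; }
static long sys_write_impl(long a, long b, long c) { (void)a; (void)b; (void)c; return 0; }
static long sys_open_impl(long a, long b, long c)  { (void)a; (void)b; (void)c; return 3; }

/* The table itself, indexed by system-call number. */
static const syscall_handler syscall_table[] = {
    [0] = sys_read_impl,
    [1] = sys_write_impl,
    [2] = sys_open_impl,
};

/* Dispatcher invoked from the trap/interrupt entry path after the CPU has
 * switched to kernel mode: validate the number, then index the table. */
long dispatch_syscall(unsigned long nr, long a0, long a1, long a2) {
    if (nr >= sizeof(syscall_table) / sizeof(syscall_table[0])
        || syscall_table[nr] == NULL)
        return -1;                      /* would be ENOSYS in a real kernel */
    return syscall_table[nr](a0, a1, a2);
}

int main(void) {
    /* Simulate a trap handler dispatching "open" (hypothetical number 2). */
    printf("open returned %ld\n", dispatch_syscall(2, 0, 0, 0));
    return 0;
}
```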

These types of kernels consist of the core functions of the operating system and the device drivers, with the ability to load modules at runtime. They provide rich and powerful abstractions of the underlying hardware (in contrast to microkernels, which provide only a small set of simple hardware abstractions and use applications called servers to provide more functionality). This approach defines a high-level virtual interface over the hardware, with a set of system calls to implement operating system services such as process management, concurrency, and memory management in several modules that run in supervisor mode. This design has several flaws and limitations:

  • Coding in kernel can be challenging, in part because one cannot use common libraries (like a full-featured libc), and because one needs to use a source-level debugger like gdb. Rebooting the computer is often required. This is not just a problem of convenience for the developers. When debugging is harder and difficulties mount, it becomes more likely that code will be 'buggier'.
  • Bugs in one part of the kernel have strong side effects; since every function in the kernel has all the privileges, a bug in one function can corrupt data structure of another, totally unrelated part of the kernel, or of any running program.
  • Kernels often become very large and difficult to maintain.
  • Even if the modules servicing these operations are separate from the whole, the code integration is tight and difficult to do correctly.
  • Since the modules run in the same address space, a bug can bring down the entire system.
  • Monolithic kernels are not portable; therefore, they must be rewritten for each new architecture that the operating system is to be used on.
In the microkernel approach, the kernel itself only provides basic functionality that allows the execution of servers, separate programs that assume former kernel functions, such as device drivers, GUI servers, etc.

Microkernels

Microkernel (also abbreviated μK or uK) is the term describing an approach to operating system design by which the functionality of the system is moved out of the traditional 'kernel', into a set of 'servers' that communicate through a 'minimal' kernel, leaving as little as possible in 'system space' and as much as possible in 'user space'. A microkernel that is designed for a specific platform or device is only ever going to have what it needs to operate. The microkernel approach consists of defining a simple abstraction over the hardware, with a set of primitives or system calls to implement minimal OS services such as memory management, multitasking, and inter-process communication. Other services, including those normally provided by the kernel, such as networking, are implemented in user-space programs, referred to as servers. Microkernels are easier to maintain than monolithic kernels, but the large number of system calls and context switches might slow down the system because they typically generate more overhead than plain function calls.

Only parts which really require being in a privileged mode are in kernel space: IPC (inter-process communication), the basic scheduler or scheduling primitives, basic memory handling, and basic I/O primitives. Many critical parts now run in user space: the complete scheduler, memory handling, file systems, and network stacks. Microkernels were invented as a reaction to traditional 'monolithic' kernel design, whereby all system functionality was put in one static program running in a special 'system' mode of the processor. In the microkernel, only the most fundamental tasks are performed, such as being able to access some (not necessarily all) of the hardware, managing memory, and coordinating message passing between the processes. Some systems that use microkernels are QNX and the HURD. In the case of QNX and Hurd, user sessions can be entire snapshots of the system itself, or 'views', as they are referred to. The very essence of the microkernel architecture illustrates some of its advantages:

  • Maintenance is generally easier.
  • Patches can be tested in a separate instance, and then swapped in to take over a production instance.
  • Rapid development time and new software can be tested without having to reboot the kernel.
  • More persistence in general: if one instance goes haywire, it is often possible to substitute it with an operational mirror.

Most microkernels use a message passing system of some sort to handle requests from one server to another. The message passing system generally operates on a port basis with the microkernel. As an example, if a request for more memory is sent, a port is opened with the microkernel and the request sent through. Once within the microkernel, the steps are similar to system calls. The rationale was that it would bring modularity to the system architecture, which would entail a cleaner system, easier to debug or dynamically modify, customizable to users' needs, and better performing. They are part of operating systems like GNU Hurd, MINIX, MkLinux, QNX and Redox OS. Although microkernels are very small by themselves, in combination with all their required auxiliary code they are, in fact, often larger than monolithic kernels. Advocates of monolithic kernels also point out that the two-tiered structure of microkernel systems, in which most of the operating system does not interact directly with the hardware, creates a not-insignificant cost in terms of system efficiency. These types of kernels normally provide only the minimal services, such as defining memory address spaces, inter-process communication (IPC) and process management. The other functions, such as running the hardware processes, are not handled directly by microkernels. Proponents of microkernels point out that monolithic kernels have the disadvantage that an error in the kernel can cause the entire system to crash. However, with a microkernel, if a kernel process crashes, it is still possible to prevent a crash of the system as a whole by merely restarting the service that caused the error.
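
A rough, self-contained sketch of the port-based exchange just described (the message layout and port primitives are invented for illustration, standing in for whatever IPC a particular microkernel actually provides): a client sends an allocation request to the memory server's port and then waits for the reply on its own port.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical message format exchanged over microkernel ports. */
enum msg_type { MSG_ALLOC_MEMORY = 1, MSG_REPLY = 2 };

struct message {
    uint32_t type;         /* what is being requested */
    uint32_t sender_port;  /* where the reply should go */
    uint64_t payload;      /* request data (e.g. size) or reply data */
};

/* Single-slot "ports", standing in for kernel-managed message queues. */
#define NPORTS 4
static struct message port_slot[NPORTS];
static int port_full[NPORTS];

/* Hypothetical IPC primitives (a real microkernel would provide these). */
static int port_send(int port, struct message m) {
    if (port < 0 || port >= NPORTS || port_full[port]) return -1;
    port_slot[port] = m;
    port_full[port] = 1;
    return 0;
}
static int port_receive(int port, struct message *m) {
    if (port < 0 || port >= NPORTS || !port_full[port]) return -1;
    *m = port_slot[port];
    port_full[port] = 0;
    return 0;
}

/* User-space memory server: handle one allocation request on its port. */
static void memory_server_step(int server_port) {
    struct message req;
    if (port_receive(server_port, &req) == 0 && req.type == MSG_ALLOC_MEMORY) {
        struct message reply = { MSG_REPLY, (uint32_t)server_port,
                                 0x10000000u /* pretend base address */ };
        port_send((int)req.sender_port, reply);
    }
}

int main(void) {
    int client_port = 0, server_port = 1;

    /* Client: request 4096 bytes from the memory server via its port. */
    struct message req = { MSG_ALLOC_MEMORY, (uint32_t)client_port, 4096 };
    port_send(server_port, req);

    memory_server_step(server_port);   /* server processes the request */

    struct message reply;
    if (port_receive(client_port, &reply) == 0 && reply.type == MSG_REPLY)
        printf("allocated region at 0x%llx\n",
               (unsigned long long)reply.payload);
    return 0;
}
```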

Other services provided by the kernel such as networking are implemented in user-space programs referred to as servers. Servers allow the operating system to be modified by simply starting and stopping programs. For a machine without networking support, for instance, the networking server is not started. The task of moving in and out of the kernel to move data between the various applications and servers creates overhead which is detrimental to the efficiency of micro kernels in comparison with monolithic kernels.

However, the microkernel approach also has disadvantages. Some are:

  • Larger running memory footprint
  • More software for interfacing is required, so there is a potential for performance loss.
  • Messaging bugs can be harder to fix due to the longer trip they have to take versus the one-off copy in a monolithic kernel.
  • Process management in general can be very complicated.

The disadvantages for micro kernels are extremely context based. As an example, they work well for small single purpose (and critical) systems because if not many processes need to run, then the complications of process management are effectively mitigated.

A microkernel allows the implementation of the remaining part of the operating system as a normal application program written in a high-level language, and the use of different operating systems on top of the same unchanged kernel.[21] It is also possible to dynamically switch among operating systems and to have more than one active simultaneously.[21]

Monolithic kernels vs. microkernels

As the computer kernel grows, so grows the size and vulnerability of its trusted computing base; and, besides reducing security, there is the problem of enlarging the memory footprint. This is mitigated to some degree by perfecting the virtual memory system, but not all computer architectures have virtual memory support.[31] To reduce the kernel's footprint, extensive editing has to be performed to carefully remove unneeded code, which can be very difficult with non-obvious interdependencies between parts of a kernel with millions of lines of code.

By the early 1990s, due to the various shortcomings of monolithic kernels versus microkernels, monolithic kernels were considered obsolete by virtually all operating system researchers.[citation needed] As a result, the design of Linux as a monolithic kernel rather than a microkernel was the topic of a famous debate between Linus Torvalds and Andrew Tanenbaum.[32] There is merit on both sides of the argument presented in the Tanenbaum–Torvalds debate.

Performance

Monolithic kernels are designed to have all of their code in the same address space (kernel space), which some developers argue is necessary to increase the performance of the system.[33] Some developers also maintain that monolithic systems are extremely efficient if well written.[33] The monolithic model tends to be more efficient[34] through the use of shared kernel memory, rather than the slower IPC system of microkernel designs, which is typically based on message passing.[citation needed]

The performance of microkernels was poor in both the 1980s and early 1990s.[35][36] However, studies that empirically measured the performance of these microkernels did not analyze the reasons of such inefficiency.[35] The explanations of this data were left to 'folklore', with the assumption that they were due to the increased frequency of switches from 'kernel-mode' to 'user-mode',[35] to the increased frequency of inter-process communication[35] and to the increased frequency of context switches.[35]

In fact, as guessed in 1995, the reasons for the poor performance of microkernels might as well have been: (1) an actual inefficiency of the whole microkernel approach, (2) the particular concepts implemented in those microkernels, and (3) the particular implementation of those concepts.[35] Therefore it remained to be studied if the solution to build an efficient microkernel was, unlike previous attempts, to apply the correct construction techniques.[35]

On the other end, the hierarchical protection domains architecture that leads to the design of a monolithic kernel[29] has a significant performance drawback each time there's an interaction between different levels of protection (i.e. when a process has to manipulate a data structure both in 'user mode' and 'supervisor mode'), since this requires message copying by value.[37]

By the mid-1990s, most researchers had abandoned the belief that careful tuning could reduce this overhead dramatically,[citation needed] but recently, newer microkernels, optimized for performance, such as L4[38] and K42 have addressed these problems.[verification needed]

The hybrid kernel approach combines the speed and simpler design of a monolithic kernel with the modularity and execution safety of a microkernel.

Hybrid (or modular) kernels

Hybrid kernels are used in most commercial operating systems such as Microsoft Windows NT 3.1, NT 3.5, NT 3.51, NT 4.0, 2000, XP, Vista, 7, 8, 8.1 and 10. Apple Inc's own macOS uses a hybrid kernel called XNU which is based upon code from OSF/1's Mach kernel (OSFMK 7.3)[39] and FreeBSD's monolithic kernel. They are similar to micro kernels, except they include some additional code in kernel-space to increase performance. These kernels represent a compromise that was implemented by some developers before it was demonstrated that pure micro kernels can provide high performance. These types of kernels are extensions of micro kernels with some properties of monolithic kernels. Unlike monolithic kernels, these types of kernels are unable to load modules at runtime on their own. Hybrid kernels are micro kernels that have some 'non-essential' code in kernel-space in order for the code to run more quickly than it would were it to be in user-space. Hybrid kernels are a compromise between the monolithic and microkernel designs. This implies running some services (such as the network stack or the filesystem) in kernel space to reduce the performance overhead of a traditional microkernel, but still running kernel code (such as device drivers) as servers in user space.

Many traditionally monolithic kernels are now at least adding (if not actively exploiting) the module capability. The best known of these kernels is the Linux kernel. The modular kernel essentially can have parts of it built into the core kernel binary, or binaries that load into memory on demand. It is important to note that a code-tainted module has the potential to destabilize a running kernel. Many people become confused on this point when discussing microkernels. It is possible to write a driver for a microkernel in a completely separate memory space and test it before 'going' live. When a kernel module is loaded, it accesses the monolithic portion's memory space by adding to it what it needs, thereby opening the doorway to possible pollution. A few advantages of the modular (or hybrid) kernel are listed below, with a minimal module sketch after the list:


  • Faster development time for drivers that can operate from within modules. No reboot required for testing (provided the kernel is not destabilized).
  • On demand capability versus spending time recompiling a whole kernel for things like new drivers or subsystems.
  • Faster integration of third party technology (related to development but pertinent unto itself nonetheless).
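
As a concrete example of the module capability, here is a minimal sketch of a Linux loadable kernel module using the standard module macros; real drivers would additionally register with a subsystem in their init function.

```c
/* Minimal loadable kernel module sketch (built with the kernel's kbuild). */
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Minimal example of a loadable module");

/* Called when the module is inserted (e.g. via insmod or modprobe). */
static int __init example_init(void)
{
    pr_info("example module loaded\n");
    return 0;               /* a non-zero return would abort loading */
}

/* Called when the module is removed (e.g. via rmmod). */
static void __exit example_exit(void)
{
    pr_info("example module unloaded\n");
}

module_init(example_init);
module_exit(example_exit);
```

Built against the running kernel's headers with a small kbuild makefile, such a module can be inserted and removed at runtime with insmod and rmmod, which is exactly the on-demand capability listed above.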

Modules, generally, communicate with the kernel using a module interface of some sort. The interface is generalized (although particular to a given operating system), so it is not always possible to use modules. Often the device drivers may need more flexibility than the module interface affords. Essentially, what was one call becomes two, and often the safety checks that had to be done only once in the monolithic kernel may now be done twice. Some of the disadvantages of the modular approach are:

  • With more interfaces to pass through, the possibility of increased bugs exists (which implies more security holes).
  • Maintaining modules can be confusing for some administrators when dealing with problems like symbol differences.

Nanokernels

A nanokernel delegates virtually all services – including even the most basic ones like interrupt controllers or the timer – to device drivers to make the kernel memory requirement even smaller than a traditional microkernel.[40]

Exokernels

Exokernels are a still-experimental approach to operating system design. They differ from the other types of kernels in that their functionality is limited to the protection and multiplexing of the raw hardware, providing no hardware abstractions on top of which to develop applications. This separation of hardware protection from hardware management enables application developers to determine how to make the most efficient use of the available hardware for each specific program.

Exokernels in themselves are extremely small. However, they are accompanied by library operating systems (see also unikernel), providing application developers with the functionalities of a conventional operating system. A major advantage of exokernel-based systems is that they can incorporate multiple library operating systems, each exporting a different API, for example one for high level UI development and one for real-time control.

History of kernel development

Early operating system kernels

Strictly speaking, an operating system (and thus, a kernel) is not required to run a computer. Programs can be directly loaded and executed on the 'bare metal' machine, provided that the authors of those programs are willing to work without any hardware abstraction or operating system support. Most early computers operated this way during the 1950s and early 1960s; they were reset and reloaded between the execution of different programs. Eventually, small ancillary programs such as program loaders and debuggers were left in memory between runs, or loaded from ROM. As these were developed, they formed the basis of what became early operating system kernels. The 'bare metal' approach is still used today on some video game consoles and embedded systems,[41] but in general, newer computers use modern operating systems and kernels.

In 1969, the RC 4000 Multiprogramming System introduced the system design philosophy of a small nucleus 'upon which operating systems for different purposes could be built in an orderly manner',[42] what would be called the microkernel approach.

Time-sharing operating systems

In the decade preceding Unix, computers had grown enormously in power – to the point where computer operators were looking for new ways to get people to use their spare time on their machines. One of the major developments during this era was time-sharing, whereby a number of users would get small slices of computer time, at a rate at which it appeared they were each connected to their own, slower, machine.[43]

The development of time-sharing systems led to a number of problems. One was that users, particularly at universities where the systems were being developed, seemed to want to hack the system to get more CPU time. For this reason, security and access control became a major focus of the Multics project in 1965.[44] Another ongoing issue was properly handling computing resources: users spent most of their time staring at the terminal and thinking about what to input instead of actually using the resources of the computer, and a time-sharing system should give the CPU time to an active user during these periods. Finally, the systems typically offered a memory hierarchy several layers deep, and partitioning this expensive resource led to major developments in virtual memory systems.

Amiga

The Commodore Amiga was released in 1985, and was among the first – and certainly most successful – home computers to feature an advanced kernel architecture. The AmigaOS kernel's executive component, exec.library, uses a microkernel message-passing design, but there are other kernel components, like graphics.library, that have direct access to the hardware. There is no memory protection, and the kernel is almost always running in user mode. Only special actions are executed in kernel mode, and user-mode applications can ask the operating system to execute their code in kernel mode.

Unix

A diagram of the predecessor/successor family relationship for Unix-like systems

During the design phase of Unix, programmers decided to model every high-level device as a file, because they believed the purpose of computation was data transformation.[45]


For instance, printers were represented as a 'file' at a known location – when data was copied to the file, it printed out. Other systems, to provide a similar functionality, tended to virtualize devices at a lower level – that is, both devices and files would be instances of some lower level concept. Virtualizing the system at the file level allowed users to manipulate the entire system using their existing file management utilities and concepts, dramatically simplifying operation. As an extension of the same paradigm, Unix allows programmers to manipulate files using a series of small programs, using the concept of pipes, which allowed users to complete operations in stages, feeding a file through a chain of single-purpose tools. Although the end result was the same, using smaller programs in this way dramatically increased flexibility as well as ease of development and use, allowing the user to modify their workflow by adding or removing a program from the chain.

In the Unix model, the operating system consists of two parts: first, the huge collection of utility programs that drive most operations; second, the kernel that runs the programs.[45] Under Unix, from a programming standpoint, the distinction between the two is fairly thin; the kernel is a program, running in supervisor mode,[46] that acts as a program loader and supervisor for the small utility programs making up the rest of the system, and provides locking and I/O services for these programs; beyond that, the kernel didn't intervene at all in user space.

Over the years the computing model changed, and Unix's treatment of everything as a file or byte stream no longer was as universally applicable as it was before. Although a terminal could be treated as a file or a byte stream, which is printed to or read from, the same did not seem to be true for a graphical user interface. Networking posed another problem. Even if network communication can be compared to file access, the low-level packet-oriented architecture dealt with discrete chunks of data and not with whole files. As the capability of computers grew, Unix became increasingly cluttered with code. This is also because the modularity of the Unix kernel is extensively scalable.[47] While kernels might have had 100,000 lines of code in the seventies and eighties, kernels of modern Unix successors like Linux have more than 13 million lines.[48]

Modern Unix-derivatives are generally based on module-loading monolithic kernels. Examples of this are the Linux kernel in its many distributions as well as the Berkeley Software Distribution variant kernels such as FreeBSD, DragonflyBSD, OpenBSD, NetBSD, and macOS. Apart from these alternatives, amateur developers maintain an active operating system development community, populated by self-written hobby kernels which mostly end up sharing many features with Linux, FreeBSD, DragonflyBSD, OpenBSD or NetBSD kernels and/or being compatible with them.[49]

Mac OS

Apple first launched its classic Mac OS in 1984, bundled with its Macintosh personal computer. Apple moved to a nanokernel design in Mac OS 8.6. Against this, the modern macOS (originally named Mac OS X) is based on Darwin, which uses a hybrid kernel called XNU, which was created by combining the 4.3BSD kernel and the Mach kernel.[50]

Microsoft Windows

Microsoft Windows was first released in 1985 as an add-on to MS-DOS. Because of its dependence on another operating system, initial releases of Windows, prior to Windows 95, were considered an operating environment (not to be confused with an operating system). This product line continued to evolve through the 1980s and 1990s, with the Windows 9x series adding 32-bit addressing and pre-emptive multitasking; but ended with the release of Windows Me in 2000.


Microsoft also developed Windows NT, an operating system with a very similar interface, but intended for high-end and business users. This line started with the release of Windows NT 3.1 in 1993, and was introduced to general users with the release of Windows XP in October 2001—replacing Windows 9x with a completely different, much more sophisticated operating system. This is the line that continues with Windows 10.

The architecture of Windows NT's kernel is considered a hybrid kernel because the kernel itself contains tasks such as the Window Manager and the IPC Managers, with a client/server layered subsystem model.[51]

IBM Supervisor

A supervisory program or supervisor is a computer program, usually part of an operating system, that controls the execution of other routines and regulates work scheduling, input/output operations, error actions, and similar functions, as well as the flow of work in a data processing system.

Historically, this term was essentially associated with IBM's line of mainframe operating systems starting with OS/360. In other operating systems, the supervisor is generally called the kernel.

In the 1970s, IBM further abstracted the supervisor state from the hardware, resulting in a hypervisor that enabled full virtualization, i.e. the capacity to run multiple operating systems on the same machine totally independently from each other. Hence the first such system was called Virtual Machine or VM.


Development of microkernels

Although Mach, developed at Carnegie Mellon University from 1985 to 1994, is the best-known general-purpose microkernel, other microkernels have been developed with more specific aims. The L4 microkernel family (mainly the L3 and the L4 kernel) was created to demonstrate that microkernels are not necessarily slow.[38] Newer implementations such as Fiasco and Pistachio are able to run Linux next to other L4 processes in separate address spaces.[52][53]

Additionally, QNX is a microkernel which is principally used in embedded systems,[54] and the open-source software MINIX, while originally created for educational purposes, is now focused on being a highly reliable and self-healing microkernel OS.



Notes

  1. ^ ab'Kernel'. Linfo. Bellevue Linux Users Group. Retrieved 15 September 2016.
  2. ^cf. Daemon (computing)
  3. ^ abRoch 2004
  4. ^ abcdefghWulf 1974 pp.337–345
  5. ^ abSilberschatz 1991
  6. ^Tanenbaum, Andrew S. (2008). Modern Operating Systems (3rd ed.). Prentice Hall. pp. 50–51. ISBN978-0-13-600663-3. . . . nearly all system calls [are] invoked from C programs by calling a library procedure . . . The library procedure . . . executes a TRAP instruction to switch from user mode to kernel mode and start execution . . .
  7. ^Denning 1976
  8. ^Swift 2005, p.29 quote: 'isolation, resource control, decision verification (checking), and error recovery.'
  9. ^Schroeder 72
  10. ^ abLinden 76
  11. ^Stephane Eranian and David Mosberger, Virtual Memory in the IA-64 Linux Kernel, Prentice Hall PTR, 2002
  12. ^Silberschatz & Galvin, Operating System Concepts, 4th ed, pp. 445 & 446
  13. ^Hoch, Charles; J. C. Browne (July 1980). 'An implementation of capabilities on the PDP-11/45'. ACM SIGOPS Operating Systems Review. 14 (3): 22–32. doi:10.1145/850697.850701.
  14. ^ abA Language-Based Approach to Security, Schneider F., Morrissett G. (Cornell University) and Harper R. (Carnegie Mellon University)
  15. ^ abc P. A. Loscocco, S. D. Smalley, P. A. Muckelbauer, R. C. Taylor, S. J. Turner, and J. F. Farrell. The Inevitability of Failure: The Flawed Assumption of Security in Modern Computing Environments. Archived 2007-06-21 at the Wayback Machine. In Proceedings of the 21st National Information Systems Security Conference, pages 303–314, Oct. 1998. [1].
  16. ^ J. Lepreau et al. The Persistent Relevance of the Local Operating System to Global Applications. Proceedings of the 7th ACM SIGOPS European Workshop. Information Security: An Integrated Collection of Essays, IEEE Comp. 1995.
  17. ^ J. Anderson, Computer Security Technology Planning Study. Archived 2011-07-21 at the Wayback Machine. Air Force Elect. Systems Div., ESD-TR-73-51, October 1972.
  18. ^* Jerry H. Saltzer; Mike D. Schroeder (September 1975). 'The protection of information in computer systems'. Proceedings of the IEEE. 63 (9): 1278–1308. CiteSeerX10.1.1.126.9257. doi:10.1109/PROC.1975.9939.
  19. ^Jonathan S. Shapiro; Jonathan M. Smith; David J. Farber (1999). 'EROS: a fast capability system'. Proceedings of the Seventeenth ACM Symposium on Operating Systems Principles. 33 (5): 170–185. doi:10.1145/319344.319163.
  20. ^Dijkstra, E. W. Cooperating Sequential Processes. Math. Dep., Technological U., Eindhoven, Sept. 1965.
  21. ^ abcdefBrinch Hansen 70 pp.238–241
  22. ^ 'SHARER, a time sharing system for the CDC 6600'. Retrieved 2007-01-07.
  23. ^ 'Dynamic Supervisors – their design and construction'. Retrieved 2007-01-07.
  24. ^Baiardi 1988
  25. ^ abLevin 75
  26. ^Denning 1980
  27. ^Jürgen Nehmer The Immortality of Operating Systems, or: Is Research in Operating Systems still Justified? Lecture Notes In Computer Science; Vol. 563. Proceedings of the International Workshop on Operating Systems of the 90s and Beyond. pp. 77–83 (1991) ISBN3-540-54987-0[2] quote: 'The past 25 years have shown that research on operating system architecture had a minor effect on existing main stream [sic] systems.'
  28. ^Levy 84, p.1 quote: 'Although the complexity of computer applications increases yearly, the underlying hardware architecture for applications has remained unchanged for decades.'
  29. ^ abc Levy 84, p.1 quote: 'Conventional architectures support a single privileged mode of operation. This structure leads to monolithic design; any module needing protection must be part of the single operating system kernel. If, instead, any module could execute within a protected domain, systems could be built as a collection of independent modules extensible by any user.'
  30. ^'Open Sources: Voices from the Open Source Revolution'. 1-56592-582-3. 29 March 1999.
  31. ^Virtual addressing is most commonly achieved through a built-in memory management unit.
  32. ^ Recordings of the debate between Torvalds and Tanenbaum can be found at dina.dk (archived 2012-10-03 at the Wayback Machine), groups.google.com, oreilly.com and Andrew Tanenbaum's website.
  33. ^ ab Matthew Russell. 'What Is Darwin (and How It Powers Mac OS X)'. O'Reilly Media. Quote: 'The tightly coupled nature of a monolithic kernel allows it to make very efficient use of the underlying hardware [..] Microkernels, on the other hand, run a lot more of the core processes in userland. [..] Unfortunately, these benefits come at the cost of the microkernel having to pass a lot of information in and out of the kernel space through a process known as a context switch. Context switches introduce considerable overhead and therefore result in a performance penalty.'
  34. ^'Operating Systems/Kernel Models - Wikiversity'. en.wikiversity.org.
  35. ^ abcdefgLiedtke 95
  36. ^Härtig 97
  37. ^Hansen 73, section 7.3 p.233 'interactions between different levels of protection require transmission of messages by value'
  38. ^ ab'The L4 microkernel family - Overview'. os.inf.tu-dresden.de.
  39. ^ Apple WWDC Videos (19 February 2017). 'Apple WWDC 2000 Session 106 - Mac OS X: Kernel' – via YouTube.
  40. ^ KeyKOS Nanokernel Architecture. Archived 2011-06-21 at the Wayback Machine.
  41. ^Ball: Embedded Microprocessor Designs, p. 129
  42. ^Hansen 2001 (os), pp.17–18
  43. ^'BSTJ version of C.ACM Unix paper'. bell-labs.com.
  44. ^Introduction and Overview of the Multics System, by F. J. Corbató and V. A. Vissotsky.
  45. ^ ab 'The Single Unix Specification'. The open group. Archived from the original on 2016-10-04. Retrieved 2016-09-29.
  46. ^The highest privilege level has various names throughout different architectures, such as supervisor mode, kernel mode, CPL0, DPL0, ring 0, etc. See Ring (computer security) for more information.
  47. ^'Unix's Revenge'. asymco.com. 29 September 2010.
  48. ^Linux Kernel 2.6: It's Worth More!, by David A. Wheeler, October 12, 2004
  49. ^This community mostly gathers at Bona Fide OS Development, The Mega-Tokyo Message Board and other operating system enthusiast web sites.
  50. ^ XNU: The Kernel. Archived 2011-08-12 at the Wayback Machine.
  51. ^'Windows - Official Site for Microsoft Windows 10 Home & Pro OS, laptops, PCs, tablets & more'. windows.com.
  52. ^'The Fiasco microkernel - Overview'. os.inf.tu-dresden.de.
  53. ^Zoller (inaktiv), Heinz (7 December 2013). 'L4Ka - L4Ka Project'. www.l4ka.org.
  54. ^'QNX Operating Systems'. blackberry.qnx.com.

References

  • Roch, Benjamin (2004). 'Monolithic kernel vs. Microkernel' (PDF). Archived from the original (PDF) on 2006-11-01. Retrieved 2006-10-12.
  • Silberschatz, Abraham; James L. Peterson; Peter B. Galvin (1991). Operating system concepts. Boston, Massachusetts: Addison-Wesley. p. 696. ISBN978-0-201-51379-0.
  • Ball, Stuart R. (2002) [2002]. Embedded Microprocessor Systems: Real World Designs (first ed.). Elsevier Science. ISBN978-0-7506-7534-5.
  • Deitel, Harvey M. (1984) [1982]. An introduction to operating systems (revisited first ed.). Addison-Wesley. p. 673. ISBN978-0-201-14502-1.
  • Denning, Peter J. (December 1976). 'Fault tolerant operating systems'. ACM Computing Surveys. 8 (4): 359–389. doi:10.1145/356678.356680. ISSN0360-0300.
  • Denning, Peter J. (April 1980). 'Why not innovations in computer architecture?'. ACM SIGARCH Computer Architecture News. 8 (2): 4–7. doi:10.1145/859504.859506. ISSN0163-5964.
  • Hansen, Per Brinch (April 1970). 'The nucleus of a Multiprogramming System'. Communications of the ACM. 13 (4): 238–241. CiteSeerX10.1.1.105.4204. doi:10.1145/362258.362278. ISSN0001-0782.
  • Hansen, Per Brinch (1973). Operating System Principles. Englewood Cliffs: Prentice Hall. p. 496. ISBN978-0-13-637843-3.
  • Hansen, Per Brinch (2001). 'The evolution of operating systems' (PDF). Retrieved 2006-10-24. Included in book: Per Brinch Hansen, ed. (2001). '1' (PDF). Classic operating systems: from batch processing to distributed systems. New York: Springer-Verlag. pp. 1–36. ISBN 978-0-387-95113-3.
  • Hermann Härtig, Michael Hohmuth, Jochen Liedtke, Sebastian Schönberg, Jean Wolter The performance of μ-kernel-based systems, Härtig, Hermann; Hohmuth, Michael; Liedtke, Jochen; Schönberg, Sebastian (1997). 'The performance of μ-kernel-based systems'. Proceedings of the sixteenth ACM symposium on Operating systems principles - SOSP '97. p. 66. CiteSeerX10.1.1.56.3314. doi:10.1145/268998.266660. ISBN978-0897919166. ACM SIGOPS Operating Systems Review, v.31 n.5, p. 66–77, Dec. 1997
  • Houdek, M. E., Soltis, F. G., and Hoffman, R. L. 1981. IBM System/38 support for capability-based addressing. In Proceedings of the 8th ACM International Symposium on Computer Architecture. ACM/IEEE, pp. 341–348.
  • Intel Corporation (2002) The IA-32 Architecture Software Developer's Manual, Volume 1: Basic Architecture
  • Levin, R.; Cohen, E.; Corwin, W.; Pollack, F.; Wulf, William (1975). 'Policy/mechanism separation in Hydra'. ACM Symposium on Operating Systems Principles / Proceedings of the Fifth ACM Symposium on Operating Systems Principles. 9 (5): 132–140. doi:10.1145/1067629.806531.
  • Levy, Henry M. (1984). Capability-based computer systems. Maynard, Mass: Digital Press. ISBN978-0-932376-22-0.
  • Liedtke, Jochen. On µ-Kernel Construction, Proc. 15th ACM Symposium on Operating System Principles (SOSP), December 1995
  • Linden, Theodore A. (December 1976). 'Operating System Structures to Support Security and Reliable Software'. ACM Computing Surveys. 8 (4): 409–445. doi:10.1145/356678.356682. ISSN 0360-0300. 'Operating System Structures to Support Security and Reliable Software' (PDF). Retrieved 2010-06-19.
  • Lorin, Harold (1981). Operating systems. Boston, Massachusetts: Addison-Wesley. pp. 161–186. ISBN978-0-201-14464-2.
  • Schroeder, Michael D.; Jerome H. Saltzer (March 1972). 'A hardware architecture for implementing protection rings'. Communications of the ACM. 15 (3): 157–170. doi:10.1145/361268.361275. ISSN0001-0782.
  • Shaw, Alan C. (1974). The logical design of Operating systems. Prentice-Hall. p. 304. ISBN978-0-13-540112-5.
  • Tanenbaum, Andrew S. (1979). Structured Computer Organization. Englewood Cliffs, New Jersey: Prentice-Hall. ISBN978-0-13-148521-1.
  • Wulf, W.; E. Cohen; W. Corwin; A. Jones; R. Levin; C. Pierson; F. Pollack (June 1974). 'HYDRA: the kernel of a multiprocessor operating system' (PDF). Communications of the ACM. 17 (6): 337–345. doi:10.1145/355616.364017. ISSN 0001-0782. Archived from the original (PDF) on 2007-09-26. Retrieved 2007-07-18.
  • Baiardi, F.; A. Tomasi; M. Vanneschi (1988). Architettura dei Sistemi di Elaborazione, volume 1 (in Italian). Franco Angeli. ISBN978-88-204-2746-7.
  • Swift, Michael M.; Brian N. Bershad; Henry M. Levy. 'Improving the reliability of commodity operating systems' (PDF). ACM Transactions on Computer Systems (TOCS), v.23 n.1, p. 77–110, February 2005.
  • Gettys, James; Karlton, Philip L.; McGregor, Scott (1990). 'Improving the reliability of commodity operating systems'. Software: Practice and Experience. 20: S35–S67. doi:10.1002/spe.4380201404. Retrieved 2010-06-19.

Further reading

  • Andrew Tanenbaum, Operating Systems – Design and Implementation (Third edition);
  • Andrew Tanenbaum, Modern Operating Systems (Second edition);
  • Daniel P. Bovet, Marco Cesati, The Linux Kernel;
  • David A. Peterson, Nitin Indurkhya, Patterson, Computer Organization and Design, Morgan Kaufmann (ISBN 1-55860-428-6);
  • B.S. Chalk, Computer Organisation and Architecture, Macmillan P. (ISBN 0-333-64551-0).
