
ACPI in Linux: Architecture, Advances, and Challenges

Len Brown Anil Keshavamurthy David Shaohua Li

Robert Moore Venkatesh Pallipadi Luming Yu

Intel Open Source Technology Center
{len.brown, anil.s.keshavamurthy, shaohua.li}@intel.com

{robert.moore, venkatesh.pallipadi, luming.yu}@intel.com

Abstract

ACPI (Advanced Configuration and Power Interface) is an open industry specification establishing industry-standard interfaces for OS-directed configuration and power management on laptops, desktops, and servers.

ACPI enables new power management technology to evolve independently in operating systems and hardware while ensuring that they continue to work together.

This paper starts with an overview of the ACPICA architecture. Next, a section describes the implementation architecture in Linux.

Later sections detail recent advances and current challenges in Linux/ACPI processor power management, CPU and memory hot-plug, legacy plug-and-play configuration, and hotkeys.

1 ACPI Component Architecture

The purpose of ACPICA, the ACPI Component Architecture, is to simplify ACPI implementations for operating system vendors (OSVs) by providing major portions of an ACPI implementation in OS-independent ACPI modules that can be integrated into any operating system.

The ACPICA software can be hosted on any operating system by writing a small and relatively simple OS Services Layer (OSL) between the ACPI subsystem and the host operating system.

The ACPICA source code is dual-licensed so that Linux can share it with other operating systems, such as FreeBSD.

1.1 ACPICA Overview

ACPICA defines and implements a group of software components that together create an implementation of the ACPI specification. A major goal of the architecture is to isolate all operating system dependencies to a relatively small translation or conversion layer (the OS Services Layer) so that the bulk of the ACPICA code is independent of any individual operating system. Therefore, hosting the ACPICA code on new operating systems requires no source code modifications within the CA code itself. The components of the architecture include (from the top down):

• A user interface to the power management and configuration features.

• A power management and power policy component (OSPM) [1].

• A configuration management component.

• ACPI-related device drivers (for example, drivers for the Embedded Controller, SMBus, Smart Battery, and Control Method Battery).

• An ACPI Core Subsystem component that provides the fundamental ACPI services (such as the AML [2] interpreter and namespace [3] management).

• An OS Services Layer for each host operating system.

1.2 The ACPI Subsystem

The ACPI Subsystem implements the low-level or fundamental aspects of the ACPI specification. It includes an AML parser/interpreter, ACPI namespace management, ACPI table and device support, and event handling. Since the ACPICA core provides low-level system services, it also requires low-level operating system services such as memory management, synchronization, scheduling, and I/O. To allow the Core Subsystem to easily interface to any operating system that provides such services, the OSL translates OS requests into the native calls provided by the host operating system.

[1] OSPM: Operating System-directed Power Management.

[2] AML: ACPI Machine Language, exported by the BIOS in ACPI tables and interpreted by the OS.

[3] The ACPI namespace tracks devices, objects, and methods accessed by the interpreter.

Figure 1: The ACPI Subsystem Architecture. The ACPI Subsystem, consisting of the OS Services Layer stacked on the ACPI Core Subsystem, is hosted by the operating system.

The OS Services Layer is the only component of ACPICA that contains code specific to a host operating system. Figure 1 illustrates that the ACPI Subsystem is composed of the OSL and the Core.

The ACPI Core Subsystem supplies the major building blocks or subcomponents that are required for all ACPI implementations, including an AML interpreter, a namespace manager, ACPI event and resource management, and ACPI hardware support.

One of the goals of the Core Subsystem is to provide an abstraction level high enough that the host OS does not need to understand or know about the very low-level ACPI details. For example, all AML code is hidden from the OSL and host operating system. Also, the details of the ACPI hardware are abstracted to higher-level software interfaces.

2005 Linux Symposium • 53

The Core Subsystem implementation makes no assumptions about the host operating system or environment. The only way it can request operating system services is via interfaces provided by the OSL. Figure 2 shows that the OSL component “calls up” to the host operating system whenever operating system services are required, either for the OSL itself or on behalf of the Core Subsystem component. All native calls directly to the host are confined to the OS Services Layer, allowing the core to remain OS-independent.

Figure 2: Interaction between the Architectural Components. Within the ACPI Subsystem, the Core Subsystem reaches the OS Services Layer through the ACPI/OS interface, and the OSL reaches the host operating system through the Host/OS interface.

1.3 ACPI Core Subsystem

The Core Subsystem is divided into several logical modules or sub-components. Each module implements a service or group of related services. This section describes each sub-component and identifies the classes of external interfaces to the components, the mapping of these classes to the individual components, and the interface names. Figure 3 shows the internal modules of the ACPI Core Subsystem and their relationship to each other. The AML interpreter forms the foundation of the component, with additional services built upon this foundation.

Figure 3: Internal Modules of the ACPI Core Subsystem: the AML Interpreter, with Event Management, ACPI Hardware Management, ACPI Table Management, Namespace Management, and Resource Management built on top of it.

1.4 AML Interpreter

The AML interpreter is responsible for the parsing and execution of the AML byte code that is provided by the computer system vendor. The services that the interpreter provides include:

• AML Control Method Execution

• Evaluation of Namespace Objects

1.5 ACPI Table Management

This component manages the ACPI tables. The tables may be loaded from the firmware or directly from a buffer provided by the host operating system. Services include:

• ACPI Table Parsing

• ACPI Table Verification

• ACPI Table Installation and Removal

1.6 Namespace Management

The Namespace component provides ACPI namespace services on top of the AML interpreter. It builds and manages the internal ACPI namespace. Services include:


• Namespace Initialization from either the BIOS or a file

• Device Enumeration

• Namespace Access

• Access to ACPI data and tables

1.7 Resource Management

The Resource component provides resource query and configuration services on top of the Namespace manager and AML interpreter. Services include:

• Getting and Setting Current Resources

• Getting Possible Resources

• Getting IRQ Routing Tables

• Getting Power Dependencies

1.8 ACPI Hardware Management

The hardware manager controls access to the ACPI registers, timers, and other ACPI-related hardware. Services include:

• ACPI Status register and Enable register access

• ACPI Register access (generic read and write)

• Power Management Timer access

• Legacy Mode support

• Global Lock support

• Sleep Transitions support (S-states)

• Processor Power State support (C-states)

• Other hardware integration: Throttling, Processor Performance, etc.

1.9 Event Handling

The Event Handling component manages the ACPI System Control Interrupt (SCI). The single SCI multiplexes the ACPI timer, Fixed Events, and General Purpose Events (GPEs). This component also manages dispatch of notification and Address Space/Operation Region events. Services include:

• ACPI mode enable/disable

• ACPI event enable/disable

• Fixed Event Handlers (Installation, removal, and dispatch)

• General Purpose Event (GPE) Handlers (Installation, removal, and dispatch)

• Notify Handlers (Installation, removal, and dispatch)

• Address Space and Operation Region Handlers (Installation, removal, and dispatch)

2 ACPICA OS Services Layer (OSL)

The OS Services Layer component of the architecture enables the re-hosting or re-targeting of the other components to execute under different operating systems, or even to execute in environments where there is no host operating system. In other words, the OSL component provides the glue that joins the other components to a particular operating system and/or environment. The OSL implements interfaces and services using native calls to the host OS. Therefore, an OS Services Layer must be written for each target operating system.

The OS Services Layer has several roles.


1. It acts as the front end for some OS-to-ACPI requests. It translates OS requests that are received in the native OS format (such as a system call interface, an I/O request/result segment interface, or a device driver interface) into calls to Core Subsystem interfaces.

2. It exposes a set of OS-specific application interfaces. These interfaces translate application requests into calls to the ACPI interfaces.

3. The OSL component implements a standard set of interfaces that perform OS-dependent functions (such as memory allocation and hardware access) on behalf of the Core Subsystem component. These interfaces are themselves OS-independent because they are constant across all OSL implementations. It is the implementations of these interfaces that are OS-dependent, because they must use the native services and interfaces of the host operating system.

2.1 Functional Service Groups

The services provided by the OS Services Layer can be categorized into several distinct groups, mostly based upon when each of the services in a group is required. There are boot-time functions, device load-time functions, run-time functions, and asynchronous functions.

Although it is the OS Services Layer that exposes these services to the rest of the operating system, it is very important to note that the OS Services Layer makes use of the services of the lower-level ACPI Core Subsystem to implement its services.

2.1.1 OS Boot-load-Time Services

Boot services are those functions that must be executed very early in the OS load process, before most of the rest of the OS initializes. These services include ACPI subsystem initialization, ACPI hardware initialization, and execution of the _INI control methods for various devices within the ACPI namespace.

2.1.2 Device Driver Load-Time Services

For the devices that appear in the ACPI namespace, the operating system must have a mechanism to detect them and load device drivers for them. The device driver load services provide this mechanism. The ACPI subsystem provides services to assist with device and bus enumeration, resource detection, and setting device resources.

2.1.3 OS Run-Time Services

The run-time services include most if not all of the external interfaces to the ACPI subsystem. These services also include event logging and power management functions.

2.1.4 Asynchronous Services

The asynchronous functions include interrupt servicing (the System Control Interrupt), event handling and dispatch (Fixed Events, General Purpose Events, Notification events, and Operation Region access events), and error handling.

2.2 OSL Required Functionality

There are three basic functions of the OS Services Layer:

Figure 4: ACPI Subsystem to Operating System Request Flow. The OS Services Layer forwards requests from the ACPI Core Subsystem out to the host operating system.

1. Manage the initialization of the entire ACPI subsystem, including both the OSL and ACPI Core Subsystem.

2. Translate requests for ACPI services from the host operating system (and its applications) into calls to the Core Subsystem component. This is not necessarily a one-to-one mapping. Very often, a single operating system request may be translated into many calls into the ACPI Core Subsystem.

3. Implement an interface layer that the Core Subsystem component uses to obtain operating system services. These standard interfaces (referred to in this document as the ACPI OS interfaces) include functions such as memory management and thread scheduling, and must be implemented using the available services of the host operating system.

2.2.1 Requests from the ACPI Subsystem to the OS

The ACPI subsystem requests OS services via the OSL, as shown in Figure 4. These requests must be serviced (and therefore implemented) in a manner that is appropriate to the host operating system. These requests include calls for OS-dependent functions such as I/O, resource allocation, error logging, and user interaction. The ACPI Component Architecture defines interfaces to the OS Services Layer for this purpose. These interfaces are constant (i.e., they are OS-independent), but they must be implemented uniquely for each target OS.

2.3 ACPICA: More Details

The ACPICA APIs are documented in detail in the ACPICA Component Architecture Programmer Reference, available on http://www.intel.com.

The ACPI header files in linux/include/acpi/ can also be used as a reference, as can the ACPICA source code in the directories under linux/drivers/acpi/.

3 ACPI in Linux

The ACPI specification describes platform registers, ACPI tables, and operation of the ACPI BIOS. It also specifies AML (ACPI Machine Language), which the BIOS exports via ACPI tables to abstract the hardware. AML is executed by an interpreter in the ACPI OS [4].

In some cases the ACPI specification describes the sequence of operations required by the ACPI OS, but generally the OS implementation is left as an exercise to the reader.

[4] ACPI OS: an ACPI-enabled OS, such as Linux.

Figure 5: Implementation Architecture. In user space sits acpid; in the kernel sit /proc/acpi, the Linux/ACPI code, the optional drivers (Button, Processor, Battery, AC, Thermal, Fan), the OSL, and the ACPICA core; below them are the ACPI tables, ACPI registers, and ACPI BIOS defined by the ACPI specification, and the platform-defined BIOS, firmware, and hardware.

There is no platform ACPI compliance test to assure that platforms and platform BIOSes are compliant with the ACPI specification. System manufacturers assume compliance when available ACPI-enabled operating systems boot and function properly on their systems.

Figure 5 shows these ACPI components logically as a layer above the platform-specific hardware and firmware.

The ACPI kernel support centers around the ACPICA core. ACPICA implements the AML interpreter as well as other OS-agnostic parts of the ACPI specification. The ACPICA code does not implement any policy; that is the realm of the Linux-specific code. A single file, osl.c, glues ACPICA to the Linux-specific functions it requires.

The box in Figure 5 labeled “Linux/ACPI” represents the Linux-specific ACPI code, including boot-time configuration.

Optional “ACPI drivers,” such as Button, Battery, and Processor, are (optionally loadable) modules that implement policy related to those specific features and devices.

There are about 200 ACPI-related files in the Linux kernel source tree; about 130 of them are from ACPICA, and the rest are specific to Linux.
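The rough file count above can be reproduced against a kernel source tree. A sketch, assuming the standard source layout (the helper name is ours, and counts will vary by kernel version):

```shell
# Hypothetical helper: count ACPI-related C sources and headers in a kernel
# source tree. The paper's ~200 figure is for a circa-2005 tree; yours will vary.
count_acpi_files() {
    tree="$1"   # path to the root of a Linux source tree
    find "$tree/drivers/acpi" "$tree/include/acpi" -name '*.[ch]' | wc -l
}
```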

4 Processor Power Management

Processor power management is a key ingredient in system power management. Managing processor speed and voltage based on utilization is effective in increasing battery life on laptops, reducing fan noise on desktops, and lowering power and cooling costs on servers. This section covers recent and upcoming Linux changes related to processor power management.

4.1 Overview of Processor Power States

But first, refer to Figure 6 for an overview of the processor power management states.

1. G0—System Working State. Processor power management states have meaning only in the context of a running system, not when the system is in one of its various sleep or off states.

2. Processor C-states: C0 is the executing CPU power state. C1–Cn are idle CPU power states used by the Linux idle loop; no instructions are executed in C1–Cn. The deeper the C-state, the more power is saved, but at the cost of higher latency to enter and exit the C-state.


Figure 6: ACPI Global, CPU, and Sleep states. G0 (S0), the working state, contains C0 (execute, with P-states and throttling) and the idle states C1, C2, and C3; G1 (sleeping) contains S1 (standby), S3 (suspend), and S4 (hibernate); G2 (S5) is soft-off, and G3 is mechanical-off.

3. Processor P-states: Performance states consist of states representing different processor frequencies and voltages. This gives the OS an opportunity to dynamically change the CPU frequency to match the CPU workload.

As power varies with the square of voltage, the voltage-lowering aspect of P-states is extremely effective at saving power.

4. Processor T-states: Throttle states change the processor frequency only, leaving the voltage unchanged.

As power varies directly with frequency, T-states are less effective than P-states for saving processor power. On a system with both P-states and T-states, Linux uses T-states only for thermal (emergency) throttling.
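The frequency-versus-voltage argument can be made concrete with the usual dynamic-power approximation P ≈ C·V²·f. The helper below is an illustrative sketch (the constant is dropped, and the formula is the textbook approximation, not a result from this paper):

```shell
# Relative dynamic power, P ~ V^2 * f (arbitrary units; constant C omitted).
rel_power() {
    volts="$1"; mhz="$2"
    awk -v v="$volts" -v f="$mhz" 'BEGIN { printf "%.2f\n", v * v * f }'
}
```

Halving frequency alone (a T-state) halves this figure; a P-state that also drops the voltage cuts it further, quadratically in V.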

4.2 Processor Power Saving Example

Table 1 illustrates that high-volume hardware offers dramatic power-saving opportunities to the OS through these mechanisms [5]. Note that these numbers reflect processor power and do not include other system components, such as the LCD, chipset, or disk drive. Note also that on this particular model, the savings in the C1, C2, and C3 states depend on the P-state the processor was running in when it became idle. This is because the P-states carry with them reduced voltage.

C-State  P-State  MHz   Volts  Watts
C0       P0       1600  1.484  24.5
C0       P1       1300  1.388  22
C0       P2       1100  1.180  12
C0       P3       600   0.956  6
C1, C2   from P0  0     1.484  7.3
C1, C2   from P3  0     0.956  1.8
C3       from P0  0     1.484  5.1
C3       from P3  0     0.956  1.1
C4       (any)    0     0.748  0.55

Table 1: C-State and P-State Processor Power

4.3 Recent Changes

4.3.1 P-state driver

The Linux kernel cpufreq infrastructure has evolved considerably over the past few years, becoming a modular interface that connects vendor-specific CPU frequency-changing drivers with CPU frequency governors, which handle the policy part of CPU frequency changes. Recent vendor technologies change the CPU voltage along with the CPU frequency, bringing much higher power savings than simple frequency changes alone. This, combined with reduced CPU frequency-changing latency (10us–100us), gives Linux an opportunity for more aggressive power savings by changing the CPU frequency often and monitoring CPU utilization closely.

[5] Ref: 1600MHz Pentium M processor datasheet.

The P-state feature, which was common in laptops, is now becoming common on servers as well. The acpi-cpufreq and speedstep-centrino drivers have been changed to support SMP systems. These drivers can run with i386 and x86-64 kernels on processors supporting Enhanced Intel SpeedStep Technology.

4.3.2 Ondemand governor

One of the major advantages that recent CPU frequency-changing technologies (like Enhanced Intel SpeedStep Technology) bring is the lower latency associated with P-state changes, on the order of 10 ms. To reap maximum benefit, Linux must perform more-frequent P-state transitions to match the current processor utilization. Doing frequent transitions with a user-level daemon would involve more kernel-to-user transitions, as well as a substantial amount of kernel-to-user data transfer. An in-kernel P-state governor, which dynamically monitors the CPU usage and makes P-state decisions based on that information, takes full advantage of low-latency P-state transitions. The ondemand policy governor is one such in-kernel P-state governor. The basic algorithm employed by the ondemand governor (as in Linux 2.6.11) is as follows:

Every X milliseconds:
    Get the current CPU utilization
    If (utilization > UP_THRESHOLD %)
        Increase the P-state to the maximum frequency

Every Y milliseconds:
    Get the current CPU utilization
    If (utilization < DOWN_THRESHOLD %)
        Decrease the P-state to the next available lower frequency
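The two rules above can be condensed into a single decision function. A sketch with illustrative threshold values (the kernel's actual tunables differ):

```shell
# One ondemand-style decision: given utilization (percent) and the current,
# maximum, and next-lower frequencies (kHz), print the frequency to select.
# The UP_THRESHOLD/DOWN_THRESHOLD values are illustrative, not the kernel's.
ondemand_step() {
    util="$1"; cur="$2"; max="$3"; next_lower="$4"
    UP_THRESHOLD=80
    DOWN_THRESHOLD=20
    if [ "$util" -gt "$UP_THRESHOLD" ]; then
        echo "$max"          # busy: jump straight to the highest frequency
    elif [ "$util" -lt "$DOWN_THRESHOLD" ]; then
        echo "$next_lower"   # idle-ish: step down one available frequency
    else
        echo "$cur"          # in between: keep the current frequency
    fi
}
```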

The ondemand governor, when supported by the kernel, will be listed in the /sys interface under scaling_available_governors. Users can select the ondemand governor as the P-state policy governor by writing onto scaling_governor:

# cat scaling_available_governors
ondemand user-space performance
# echo ondemand > scaling_governor
# cat scaling_governor
ondemand

This sequence must be repeated on all the CPUs present in the system. Once this is done, the ondemand governor will take care of adjusting the CPU frequency automatically, based on the current CPU usage. CPU usage is based on the idle_ticks statistics. Note: on systems that do not support low-latency P-state transitions, scaling_governor will not change to “ondemand” above. Because a single policy governor cannot satisfy all of the needs of applications in various usage scenarios, the ondemand governor supports a number of tuning parameters. More details can be found on Intel's web site [6].
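Repeating the sequence on every CPU is easy to script. A sketch, assuming the sysfs cpufreq layout shown above (the helper name is ours; run as root):

```shell
# Hypothetical helper: switch every CPU below the given sysfs root to the
# ondemand governor, skipping CPUs that do not list ondemand as available.
set_ondemand_all() {
    root="$1"   # e.g. /sys/devices/system/cpu
    for dir in "$root"/cpu[0-9]*/cpufreq; do
        [ -d "$dir" ] || continue
        grep -qw ondemand "$dir/scaling_available_governors" 2>/dev/null || continue
        echo ondemand > "$dir/scaling_governor"
    done
}
```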

4.3.3 cpufreq stats

[6] Enhanced Intel SpeedStep Technology for the Pentium M Processor.

Another addition to the cpufreq infrastructure is the cpufreq stats interface. This interface appears in /sys/devices/system/cpu/cpuX/cpufreq/stats whenever cpufreq is active, and provides statistics about the frequency of a particular CPU over time. It provides:

• Total number of P-state transitions on thisCPU.

• Amount of time (in jiffies) spent in eachP-state on this CPU.

• A two-dimensional (n x n) matrix whose entry count(i,j) gives the number of transitions from Pi to Pj.

• A top-like tool can be built over this interface to show the system-wide P-state statistics.
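As a sketch of what such a tool might read, the helper below turns a time_in_state file (lines of "<frequency> <time>") into per-frequency percentages; the file-format assumption follows the description above:

```shell
# Print the share of time spent at each frequency, from a cpufreq stats
# time_in_state file (one "<freq> <time>" pair per line).
time_in_state_pct() {
    awk '{ freq[NR] = $1; t[NR] = $2; total += $2 }
         END { for (i = 1; i <= NR; i++)
                   printf "%s kHz: %.1f%%\n", freq[i], 100 * t[i] / total }' "$1"
}
```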

4.3.4 C-states and SMP

Deeper C-states (C2 and higher) are mostly used on laptops, and in today's kernel, C-states are supported only on UP systems. But laptop CPUs will soon be dual-core, which means we need to support C2 and higher states on SMP systems as well. Support for C2 and above on SMP is in the base kernel now, ready for future generations of processors and platforms.

4.4 Upcoming Changes

4.4.1 C4, C5, . . .

In the future, one can expect even deeper C-states with higher latencies. But with the Linux kernel jiffy running at 1 ms, the CPU may not stay idle long enough to justify entering the C4 or C5 states. This is where the existing variable-HZ solution can be used to exploit the deeper C-states. The idea is to reduce the rate of timer interrupts (and local APIC interrupts) when the CPU is idle, so that a CPU can stay in a low-power idle state longer when it is idle.

4.4.2 ACPI 3.0-based Software Coordination for P-states and C-states

ACPI 3.0 supports having P-state and C-state domains defined across CPUs. A domain includes all the CPUs that share a P-state and/or C-state. Using this information from ACPI and performing software coordination of P-states and C-states across their domains, the OS can have much more control over the actual P-states and C-states and can optimize its policies on systems running with different configurations.

Consider, for example, a system with two CPU packages and two cores per package. Assume the two cores in the same package share P-states (meaning both cores in the same package change frequency at the same time). If the OS has this information, then when only two threads are running, the OS can schedule them on different cores of the same package and move the other package to a lower P-state, thereby saving power without losing significant performance.

Work is in progress to support software coordination of P-states and C-states wherever CPUs share the corresponding P-states and C-states.

5 ACPI Support for CPU and Memory Hot-Plug

Platforms supporting physical hot-add and hot-remove of CPU/Memory devices are entering the systems market. This section covers a variety of recent changes that went into the kernel specifically to enable ACPI-based platforms to support CPU and Memory hot-plug technology.

5.1 ACPI-based Hot-Plug Introduction

The hot-plug implementation can be viewed as two blocks: one implementing the ACPI-specific portion of hot-plug, and the other the non-ACPI-specific portion.

The non-ACPI-specific portion of CPU/Memory hot-plug, which is being actively worked on by the Linux community, supports what is known as logical hot-plug. Logical hot-plug is just removing or adding the device from the operating system's perspective, while physically the device still stays in the system. In the CPU or Memory case, the device can be made to disappear or appear from the OS perspective by echoing either 0 or 1 to the respective online file. Refer to the respective hot-plug papers to learn more about the logical online/offline support for these devices.

The ACPI-specific portion of hot-plug is what bridges the gap, allowing platforms with physical hot-plug capability to take advantage of the logical hot-plug support in the kernel and so provide true physical hot-plug. ACPI is not involved in the logical part of onlining or offlining the device.
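The logical hot-plug described above is just a sysfs write. A minimal parameterized sketch (the helper name is ours; run as root against e.g. /sys/devices/system/cpu/cpu1):

```shell
# Logically add ("1") or remove ("0") a device by writing its sysfs online file.
logical_set() {
    dev_dir="$1"   # the device's sysfs directory
    state="$2"     # 1 = logically add, 0 = logically remove
    echo "$state" > "$dev_dir/online"
}
```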

5.2 ACPI Hot-Plug Architecture

At module init time we search the ACPI device namespace and register a system notify handler callback on each device of interest. For CPU hot-plug support we look for ACPI_TYPE_PROCESSOR_DEVICE; for Memory hot-plug support we look for the PNP0C80 HID [7]; and for containers [8] we look for ACPI004, PNP0A06, or PNP0A05 devices.

When a device is hot-plugged, the core chipset or the platform raises the SCI [9]; the SCI handler within the ACPI core clears the GPE event and runs the _Lxx [10] method associated with the GPE. This _Lxx method in turn executes Notify(XXXX, 0), which notifies the ACPI core, which in turn calls the hot-plug module's callback that was registered at module init time.

When the module gets notified, its notify callback handler looks at the event code and takes the appropriate action based on the event. See the module design sections for more details.

5.3 ACPI Hot-Plug support Changes

The following enhancements were made to support physical Memory and/or CPU device hot-plug.

• A new acpi_memhotplug.c module was introduced in the drivers/acpi directory for memory hot-plug.

• The existing ACPI processor driver was enhanced to support ACPI hot-plug notification for the physical insertion/removal of a processor.

• A new container module was introduced to support hot-plug notification on an ACPI container device. The ACPI container device can contain multiple devices, including another container device.

[7] HID: Hardware ID.

[8] A container device captures hardware dependencies, such as a Processor and Memory sharing a single removable board.

[9] SCI: ACPI's System Control Interrupt; appears as “acpi” in /proc/interrupts.

[10] _Lxx: L stands for level-sensitive, xx is the GPE number; e.g., GPE 42 would use the _L42 handler.

5.4 Memory module

A new acpi_memhotplug.c driver was introduced which adds support for ACPI-based Memory hot-plug. This driver fields notifications on the ACPI memory device (PNP0C80), which represents memory ranges that may be hot-added or hot-removed during run time. The driver is enabled with CONFIG_ACPI_HOTPLUG_MEMORY in the config file and is required on ACPI platforms supporting physical hot-plug of the Memory DIMMs (at some platform granularity).

Design: The memory hot-plug module's device notify callback gets called when the memory device is hot-plugged. The handler checks the event code; for the hot-add case, it first checks the device for physical presence, reads the memory range reported by the _CRS method, and tells the VM about the new device. The VM, which resides outside of ACPI, is responsible for the actual addition of this range to the running kernel. The ACPI memory hot-plug module does not yet implement the hot-remove case.
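After the VM adds the range, the logical-online step is again a sysfs write from user space. A sketch assuming the memory-hotplug sysfs convention (a per-block state file accepting "online"), which may differ on kernels of this era:

```shell
# Bring a hot-added memory block online by writing its sysfs state file.
# The path layout is illustrative (e.g. /sys/devices/system/memory/memoryX).
online_memory_block() {
    block_dir="$1"
    echo online > "$block_dir/state"
}
```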

5.5 Processor module

The ACPI processor module can now support physical CPU hot-plug by enabling CONFIG_ACPI_HOTPLUG_CPU under CONFIG_ACPI_PROCESSOR.

Design: The processor hot-plug module's device notify callback gets called when the processor device is hot-plugged. The handler checks the event code; for the hot-add case, it first creates the ACPI device by calling acpi_bus_add() and acpi_bus_scan(), and then notifies the user-mode agent by invoking kobject_hotplug() using the kobj of the ACPI device that was hot-plugged. The user-mode agent in turn onlines the corresponding CPU devices by echoing onto the online file. acpi_bus_add() invokes the .add method of the processor module, which in turn sets up the apic_id-to-logical_id mapping required for logical online.

For the remove case, the notify callback handler notifies the event to the user-mode agent by invoking kobject_hotplug() using the kobj of the ACPI device. The user-mode agent first off-lines the device and then echoes 1 onto the eject file under the corresponding ACPI namespace device file to remove the device physically. This action leads to a call into the kernel-mode routine acpi_bus_trim(), which in turn calls the .remove method of the processor driver, tearing down the ACPI-id-to-logical-id mapping and releasing the ACPI device.

5.6 Container module

ACPI defines a Container device with the HID being ACPI004, PNP0A06, or PNP0A05. This device can in turn contain other devices. For example, a container device can contain multiple CPU devices and/or multiple Memory devices. On a platform which supports hot-plug notify on a Container device, this driver needs to be enabled in addition to the above device-specific hot-plug drivers. The driver is enabled with CONFIG_ACPI_CONTAINER in the config file.

Design: The module init is much the same as in the other drivers: we register for the system notify callback on every container device within the ACPI root namespace scope. container_notify_cb() gets called when the container device is hot-plugged. For the hot-add case it first creates an ACPI device by calling acpi_bus_add() and acpi_bus_scan(). acpi_bus_scan(), which is a recursive call, in turn calls the .add method of the respective hotplug devices. When acpi_bus_scan() returns, the container driver notifies the user mode agent by invoking kobject_hotplug() with the kobj of the container device. The user mode agent is responsible for bringing the devices online by echoing onto the online file of each of them.
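The recursive scan below the container can be illustrated with a toy namespace walk. The struct and function names are invented for the sketch; only the shape (a container node whose descendants each get their .add method invoked) mirrors the acpi_bus_scan() behavior described above.

```c
#include <stddef.h>

/* Toy namespace node: a container holds children; hot-pluggable
 * leaves carry an .add method, as the device-specific drivers do. */
struct node {
    const char *name;
    struct node *child[4];        /* NULL-terminated child list */
    int (*add)(struct node *);    /* per-device .add hook, may be NULL */
};

static int added;                 /* count of .add invocations */
static int cpu_add(struct node *n) { (void)n; added++; return 0; }

/* Recursive walk mirroring acpi_bus_scan(): visit every descendant
 * of the container and invoke its .add method where present. */
static void bus_scan(struct node *n)
{
    if (n->add)
        n->add(n);
    for (int i = 0; i < 4 && n->child[i]; i++)
        bus_scan(n->child[i]);
}
```

A container holding two CPU nodes would thus produce two .add calls in one scan, after which a single kobject_hotplug() on the container suffices to tell user space about the whole subtree.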

5.7 Future work

ACPI-based NUMA-node hotplug support (although scattered patches supporting this feature exist from different hardware vendors), memory hot-remove support, and handling physical hot-add of memory devices. The last should be done in a manner consistent with CPU hotplug: first kernel mode does setup and notifies user mode, then user mode brings the device on-line.

6 PNPACPI

The ACPI specification replaces the PnP BIOS specification. As of this year, on a platform that supports both specifications, the Linux PNPACPI code supersedes the Linux PNPBIOS code. An ACPI-compatible BIOS defines all PNP devices in its ACPI DSDT.11 Every ACPI PNP device defines a PNP ID, so the OS can enumerate this kind of device through the PNP ID. ACPI PNP devices also define some methods for the OS to manipulate device resources. These methods include _CRS (return current resources), _PRS (return all possible resources), and _SRS (set resources).

11 DSDT, Differentiated System Description Table, the primary ACPI table containing AML

pnpacpi_get_resources()
    pnpacpi_parse_allocated_resource()   /* _CRS */

pnpacpi_disable_resources()
    acpi_evaluate_object(_DIS)           /* _DIS */

pnpacpi_set_resources()
    pnpacpi_build_resource_template()    /* _CRS */
    pnpacpi_encode_resources()           /* _PRS */
    acpi_set_current_resources()         /* _SRS */

Figure 7: ACPI PNP protocol callback routines

The generic Linux PNP layer abstracts PNPBIOS and ISAPNP, and some drivers use the interface. A natural way to add ACPI PNP support is to provide a PNPBIOS-like driver that hooks ACPI into the PNP layer, which is what the current PNPACPI driver does. Figure 7 shows the three callback routines required by the PNP layer and an overview of their implementation. In this way, existing PNP drivers transparently support ACPI PNP. There are still some systems on which PNPACPI does not work, such as the ES7000 system; the boot option pnpacpi=off disables PNPACPI.
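The hook amounts to filling in the three callbacks of Figure 7. The sketch below uses a simplified stand-in for the kernel's struct pnp_protocol (the real structure, registration, and callback signatures differ); the bodies are stubs whose comments note which AML method each callback would evaluate.

```c
/* Simplified stand-in for the kernel's struct pnp_protocol. */
struct pnp_protocol {
    const char *name;
    int (*get)(void);      /* read current resources    (_CRS) */
    int (*set)(void);      /* program chosen resources  (_SRS) */
    int (*disable)(void);  /* disable the device        (_DIS) */
};

static int pnpacpi_get(void)     { return 0; /* parse _CRS */ }
static int pnpacpi_set(void)     { return 0; /* template from _CRS,
                                                encode via _PRS, run _SRS */ }
static int pnpacpi_disable(void) { return 0; /* evaluate _DIS */ }

static struct pnp_protocol pnpacpi_protocol = {
    "Plug and Play ACPI",
    pnpacpi_get,
    pnpacpi_set,
    pnpacpi_disable,
};
```

Because the PNP layer only ever calls through these function pointers, PNPBIOS, ISAPNP, and PNPACPI are interchangeable backends from a PNP driver's point of view.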

Compared with PNPBIOS, PNPACPI does not need to call into the 16-bit BIOS. Rather, it directly uses the ACPICA APIs, so it is faster and more OS friendly. Furthermore, PNPACPI works even on IA64. In the past, IA64 required ACPI-specific drivers such as the 8250_acpi driver. But since ACPI PNP works on all platforms with ACPI enabled, existing PNP drivers now work on IA64, and the ACPI-specific drivers can be removed. We have not yet removed all of them, for the sake of stabilization (the PNPACPI driver must first be widely tested).

Another advantage of ACPI PNP is that it supports device hotplug. A PNP device can define some methods (_DIS, _STA, _EJ0) to support hotplug. The OS evaluates a device's _STA method to determine the device's status. Every time the device's status changes, the device receives a notification, and the OS-registered device notification handler can then hot-add or hot-remove the device. An example of PNP hotplug is a docking station, which generally includes some PNP devices and/or PCI devices.
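The _STA return value that drives this decision is a bit mask; the low-bit meanings below come from the ACPI specification, while the macro names and the helper are just illustrative.

```c
/* _STA result bits, as defined by the ACPI specification. */
#define ACPI_STA_PRESENT     (1u << 0)  /* device is present */
#define ACPI_STA_ENABLED     (1u << 1)  /* enabled and decoding resources */
#define ACPI_STA_SHOWN_IN_UI (1u << 2)  /* should be shown in the UI */
#define ACPI_STA_FUNCTIONAL  (1u << 3)  /* functioning properly */

/* A device is usable when it is both present and enabled; after a
 * successful eject (_EJ0), _STA typically reports 0: not present. */
static int device_usable(unsigned int sta)
{
    return (sta & ACPI_STA_PRESENT) && (sta & ACPI_STA_ENABLED);
}
```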

In the initial implementation of ACPI PNP, we registered a default ACPI driver for all PNP devices, and that driver hooked each ACPI PNP device into the PNP layer. With this implementation, adding an ACPI PNP device automatically put the PNP device into the Linux PNP layer, so the driver was hot-pluggable. Unfortunately, the feature conflicted with some specific ACPI drivers (such as 8250_acpi), so we removed it. We will reintroduce the feature after the specific ACPI drivers are removed.

7 Hot-Keys

Keys and buttons on ACPI-enabled systems come in three flavors:

1. Keyboard keys, handled by the keyboard driver, the Linux input sub-system, and the X window system. Some platforms add additional keys to the keyboard hardware, and the input sub-system needs to be augmented to understand them, through utilities that map scan-codes to characters, or through model-specific keyboard drivers.

2. Power, Sleep, and Lid buttons. These three buttons are fully described by the ACPI specification. The kernel's ACPI button.c driver sends these events to user-space via /proc/acpi/event. A user-space utility such as acpid(8) is responsible for deciding what to do with them. Typically shutdown is invoked on power button events, and suspend is invoked for sleep or lid button events.

3. The “other” keys are generally called “hot-keys,” and have icons on them describing various functions such as display output switching, LCD brightness control, WiFi radio, audio volume control, etc.
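For flavor 2 above, acpid(8) matches the events it reads against rule files; a minimal power-button rule, following the stock acpid conventions (the exact paths and actions vary by distribution), might read:

```
# /etc/acpi/events/power -- run on power button press
event=button/power.*
action=/sbin/shutdown -h now
```

A matching rule for the lid or sleep button would invoke the system's suspend mechanism instead.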

Hot-keys may be implemented in a variety of ways, even within the same platform.

• Full BIOS control: Here hot-keys trigger an SMI, and the SMM BIOS12 handles everything. Using this method, the hot-key is invisible to the kernel; to the OS, these functions are effectively done “in hardware.”

The advantage is that the buttons will perform their functions successfully, even in the presence of an ignorant or broken OS.

The disadvantage is that the OS is completely unaware that these functions are occurring and thus has no opportunity to optimize its policies. Also, as the SMI/SMM is shipped by the OEM in the BIOS, users are unable either to fix it when it is broken, or to customize it in any way.

Some systems include this SMI-based hot-key mechanism, but disable it when an ACPI-enabled OS boots and puts the system into ACPI mode.

• Self-contained AML methods: From a user's—even a kernel programmer's—point of view, this method is analogous to the full-BIOS-control method above. The OS is unaware that the button has been pressed or of what the button does. However, the OS actually supplies the mechanics for this kind of button to work; it would not work if the OS's interrupts and ACPI AML interpreter were not available.

12 SMI, System Management Interrupt; SMM, System Management Mode—an interrupt that sends the processor directly into BIOS firmware.

Here a GPE13 causes an ACPI interrupt. The ACPI sub-system responds to the interrupt, decodes which GPE caused it, and vectors to the associated BIOS-supplied GPE handler (_Lxx/_Exx/_Qxx). The handler is supplied by the BIOS in AML, and the kernel's AML interpreter makes it run, but the OS is not informed about what the handler does. The handler in this scenario is hard-coded to tickle whatever hardware is necessary to implement the button's function.

• Event based: This is a platform-specific method. Each hot-key press triggers a corresponding hot-key event from /proc/acpi/event to notify a user space daemon, such as acpid(8). Then acpid must execute the corresponding AML methods for the hot-key function.

• Polling based: Another non-standard implementation. Each hot-key press triggers a polling event from /proc/acpi/event to notify the user space daemon acpid to query the hot-key status. Then acpid calls the related AML methods.

Today there are several platform-specific “ACPI” drivers in the kernel tree, such as asus_acpi.c, ibm_acpi.c, and toshiba_acpi.c, and there are even more of this kind out-of-tree. The problem with these drivers is that they work only for the platforms they were designed for; if you don't have that platform, they don't help you. Also, the different drivers perform largely the same functions.

There are many different platform vendors, and producing and supporting a platform-specific driver for every possible vendor is not a good strategy. So this year several efforts have been made to unify some of this code, with the goal that the kernel contain less code that works on more platforms.

13 GPE, General Purpose Event.

7.1 ACPI Video Control Driver

The ACPI specification includes an appendix describing ACPI Extensions for Display Adapters. This year, Bruno Ducrot created the initial acpi/video.c driver to implement it.

This driver registers notify handlers on the ACPI video device to handle events. It also exports files in /proc for manual control.

The notify handlers in the video driver are sufficient on many machines to make the display control hot-keys work. This is because the AML GPE handlers associated with these buttons simply issue a Notify() event on the display device; if the video.c driver is loaded and registered on that device, it receives the event and invokes the AML methods associated with the request via the ACPI interpreter.

7.2 Generic Hot-Key Driver

More recently, Luming Yu created a generic hot-key driver with the goal of factoring the common code out of the platform-specific drivers. This driver is intended to support the two non-standard hot-key implementations—event-based and polling-based.

The idea is that configurable interfaces can be used to register mappings between event numbers and the GPEs associated with hot-keys, and mappings between event numbers and AML methods; then we don't need the platform-specific drivers.


Here the user-space daemon, acpid, needs to issue a request to an interface for the execution of those AML methods upon receiving a specific hot-key GPE. So the generic hot-key driver implements the following interfaces to meet the requirements of non-standard hot-keys.

• Event based configuration interface, /proc/acpi/hotkey/event_config.

– Register mappings of event number to hot-key GPE.

– Register the ACPI handle to install a notify handler for the hot-key GPE.

– Register AML methods associated with the hot-key GPE.

• Polling based configuration interface, /proc/acpi/hotkey/poll_config.

– Register mappings of event number to hot-key polling GPE.

– Register the ACPI handle to install a notify handler for the hot-key polling GPE.

– Register AML methods associated with the polling GPE.

– Register AML methods associated with the hot-key event.

• Action interface, /proc/acpi/hotkey/action.

– Once acpid knows which event is triggered, it can issue a request to the action interface with arguments to call the corresponding AML methods.

– For polling based hot-keys, once acpid knows the polling event has triggered, it can issue a request to the action interface to call the polling method, then obtain the hot-key event number from the polling method's results. Then acpid can issue another request to the action interface to invoke the right AML methods for that hot-key function.

The current usage model for this driver requires some hacking—okay for programmers, but not okay for distributors. Before using the generic hot-key driver on a specific platform, you need to figure out how the vendor implemented hot-keys for it. If they fall into the first two standard classes above, the generic hot-key driver is useless, because those hot-keys work without any hot-key driver, including this generic one. Otherwise, you need to follow these steps.

• Disassemble DSDT.

• Figure out the AML method of hot-key initialization.

• Observe /proc/acpi/event to find the GPE associated with each hot-key.

• Figure out the specific AML methods associated with each hot-key GPE.

• After collecting sufficient information, configure it through the event_config and poll_config interfaces.

• Adjust the acpid scripts to issue the right commands to the action interface.

The hope is that this code will evolve into something that consolidates, or at least mitigates, a potential explosion in platform-specific drivers. But to reach that goal, it will need to be supportable without the complicated administrator incantations it requires today. The current thinking is that the addition of a quirks table for configuration may take this driver from prototype to something that “just works” on many platforms.


8 Acknowledgments

It has been a busy and productive year on Linux/ACPI. This progress would not have been possible without the efforts of the many developers and testers in the open source community. Thank you all for your efforts and your support—keep up the good work!

9 References

Hewlett-Packard, Intel, Microsoft, Phoenix, Toshiba. Advanced Configuration and Power Interface Specification, Revision 3.0, September 2, 2004. http://www.acpi.info

ACPI Component Architecture Programmer Reference, Intel Corporation.

Linux/ACPI project home page: http://acpi.sourceforge.net


Proceedings of the Linux Symposium

Volume One

July 20th–23rd, 2005
Ottawa, Ontario

Canada


Conference Organizers

Andrew J. Hutton, Steamballoon, Inc.
C. Craig Ross, Linux Symposium
Stephanie Donovan, Linux Symposium

Review Committee

Gerrit Huizenga, IBM
Matthew Wilcox, HP
Dirk Hohndel, Intel
Val Henson, Sun Microsystems
Jamal Hadi Salim, Znyx
Matt Domsch, Dell
Andrew Hutton, Steamballoon, Inc.

Proceedings Formatting Team

John W. Lockhart, Red Hat, Inc.

Authors retain copyright to all submitted papers, but have granted unlimited redistribution rights to all as a condition of submission.