


7 System Interconnect

2014.08.18

a10_54004

The components of the hard processor system (HPS) communicate with one another, and with other portions of the SoC device, through the system interconnect. The system interconnect consists of the following blocks:

• The main level 3 (L3) interconnect
• The SDRAM L3 interconnect
• The level 4 (L4) buses

The system interconnect is the main communication bus for the MPU and all IPs in the SoC device.

The system interconnect is implemented by the Arteris® FlexNoC™ network-on-chip (NoC) interconnect module.

The system interconnect supports the following features:

• Seven independent L4 buses running in several clock domains. These buses handle data traffic between peripherals at multiple performance levels, implementing the following protocols:

  • ARM Advanced Microcontroller Bus Architecture (AMBA®) Advanced eXtensible Interface (AXI™)
  • Advanced High-Performance Bus (AHB™)
  • AHB-Lite
  • Advanced Peripheral Bus (APB™)
  • Open Core Protocol (OCP)

• SDRAM L3 interconnect, providing access to a hard memory controller in the FPGA fabric
• On-chip debugging and tracing capabilities
• Security firewalls implementing the following capabilities:

  • Secure or nonsecure access configured per peripheral
  • Privileged or user access configured per peripheral (for some peripherals)
  • Security optionally configured per transaction

Related Information
www.arteris.com
For detailed information about the FlexNoC Network-on-Chip Interconnect, refer to the Arteris website.



About the System Interconnect

Features of the System Interconnect
The system interconnect supports high-throughput peripheral devices. The system interconnect has the following characteristics:

• Byte-oriented address handling
• Data bus widths up to 128 bits
• Byte enable handling
• Security firewall support:
  • Secure or nonsecure access, configured per peripheral
  • Privileged or user access, configured for some peripherals
  • Per-transaction security
• Dedicated SDRAM L3 interconnect with a dedicated SDRAM scheduler
• On-chip debug and tracing capabilities
• Seven independent L4 buses that operate on different clock domains and protocols

The L4 buses are each connected to a master in the main L3 interconnect. Each L4 bus is 32 bits wide and is connected to multiple slaves. Each L4 bus operates on a separate clock source.

System Interconnect Block Diagram and System Integration

Related Information
HPS-FPGA Bridges
For more information, refer to the HPS-FPGA Bridges chapter in the Hard Processor System Technical Reference Manual.

System Interconnect Block Diagrams
The following figures show the system interconnect, including the main L3 interconnect, SDRAM L3 interconnect, and L4 buses.



Figure 7-1: High-Level System Interconnect Block Diagram

The following figure shows the relationships among the system interconnect and other major SoC components.

The figure shows the MPU subsystem and the HPS peripherals connected to the system interconnect, which comprises the main L3 interconnect and the SDRAM L3 interconnect. The SDRAM L3 interconnect connects through the hard memory controller (HMC) in the FPGA core to the external SDRAM.

Related Information
Master-to-Slave Connectivity Matrix on page 7-8

Figure 7-2: On-Chip RAM Connections with Firewall

Up to six regions of the on-chip RAM can be partitioned for nonsecure accesses.

The figure shows the MPU, DAP, FPGA-to-HPS bridge, ETR, DMA, USB OTG, SD/MMC, EMAC, and NAND masters reaching the on-chip RAM through the on-chip RAM firewall.



Figure 7-3: Peripherals Connections with Firewall

The peripherals firewall can be programmed to specify the security status of all available masters.

The figure shows the MPU, DAP, FPGA-to-HPS bridge, and DMA masters passing through the peripherals firewall to reach the following buses and slaves:

• L4 main bus: DMA (secure/nonsecure registers), SPI 0/1/2/3
• L4 master peripheral bus: EMAC 0/1/2, SD/MMC, QSPI
• L4 AHB bus: QSPI data, NAND data, USB OTG 0/1
• L4 slave peripheral bus: UART 0/1, I2C 0/1/2/3/4, SP timer 0/1, GPIO 0/1

Figure 7-4: System Connections with Firewall

The system firewall filters accesses to all system peripherals.

The figure shows the MPU, DAP, FPGA-to-HPS bridge, ETR, and DMA masters passing through the system firewall to reach the following buses and slaves:

• L4 system bus: OSC 0/1 timers, watchdog 0/1 timers, system manager, reset manager, clock manager, FPGA manager, L3 CSR, and L4 firewall CSR
• L4 ECC bus: on-chip RAM, DMA, QSPI, NAND, USB OTG 0/1, EMAC 0/1/2, and SD/MMC ECC registers
• L4 DAP bus: DAP and STM



Figure 7-5: HPS-to-FPGA Bridge Connections with Firewall

The HPS-to-FPGA bridge firewall can be programmed to configure the security policy for each connection from a master to a bridge.

The figure shows the MPU, DMA, USB OTG, SD/MMC, EMAC, NAND, DAP, and ETR masters passing through the HPS-to-FPGA firewall to reach the HPS-to-FPGA bridge and the lightweight HPS-to-FPGA bridge.

Figure 7-6: SDRAM Connections with Firewalls

Accesses to cacheable memory that pass through the SDRAM firewall to the MPU Accelerator Coherency Port (ACP) and do not result in a hit must pass through a secondary SDRAM firewall.

The figure shows two firewalls: the DDR L3 firewall, which filters the MPU, DAP, ETR, DMA, USB OTG, SD/MMC, EMAC, NAND, and FPGA-to-HPS masters on their way to the SDRAM scheduler and MPU ACP, and the DDR firewall, which filters the FPGA masters and the path for ACP cache misses into the SDRAM scheduler.

Related Information
SDRAM L3 Interconnect Block Diagram and System Integration on page 7-7
More detailed information about the firewall

System Interconnect Architecture
The system interconnect is a partially-connected fabric. Not all masters can access all slaves.

The system interconnect is a network on chip (NoC) providing interface widths up to 64 bits, connecting to the L4 slave buses and to HPS and FPGA masters and slaves.

The system interconnect supports the following protocols:

• AXI
• AHB
• APB
• OCP



The system interconnect provides low-latency connectivity to the following bridges:

• HPS-to-FPGA bridge
• Lightweight HPS-to-FPGA bridge
• FPGA-to-HPS bridge
• Three FPGA-to-SDRAM ports

The main portion of the system interconnect runs at up to half the MPU main clock frequency (mpu_clk).

• SDRAM L3 interconnect:

  • Part of the system interconnect
  • Contains the DDR scheduler
  • Provides access to the hard memory controller (HMC)

SDRAM L3 Interconnect Architecture
The system interconnect includes the SDRAM L3 interconnect.

The SDRAM L3 interconnect provides access to the hard memory controller in the FPGA portion of the SoC device. The SDRAM L3 interconnect includes these components:

• SDRAM scheduler
• SDRAM adapter

The SDRAM L3 interconnect runs at the full MPU main clock frequency (mpu_clk), up to 800 MHz.

Main Connectivity Matrix
Pending

Secure Firewall
The system interconnect firewalls are used to enforce security policies for slave and memory region transactions within the system interconnect.

Each firewall contains the security configuration registers (SCRs) that set security policies and define which transactions are allowed to go through the firewall. There are five main firewalls within the SoC:

• Peripheral
• System
• HPS-to-FPGA
• On-Chip RAM
• SDRAM (which includes the DDR and DDR L3 firewalls)

Related Information
SoC Security
For more information, refer to the SoC Security chapter in the Hard Processor System Technical Reference Manual.

About the Rate Adapter
Pending

About the Scheduler
Pending



About the HMC Adapter

SDRAM L3 Interconnect
The hard processor system (HPS) provides a specialized SDRAM L3 interconnect dedicated to SDRAM accesses.

The SDRAM L3 interconnect contains a memory scheduler as well as adapter logic responsible for connecting the HPS to the SDRAM hard memory controller (HMC) in the FPGA fabric. The SDRAM L3 interconnect is part of the system interconnect, which is implemented by the Arteris FlexNoC IP.

Features of the SDRAM L3 Interconnect
The SDRAM L3 interconnect supports the following features:

• Connectivity to the SDRAM HMC supporting:
  • DDR4-SDRAM
  • DDR3-SDRAM
  • LPDDR3-SDRAM
• Configurable SDRAM device data widths:
  • 16-bit, with or without 8-bit error-correcting code (ECC)
  • 32-bit, with or without 8-bit ECC
  • 64-bit, with or without 8-bit ECC
• Five high-performance ports:
  • MPU subsystem port
  • Main L3 interconnect port
  • Three FPGA ports:
    • Two 32-, 64-, or 128-bit ports
    • One 32- or 64-bit port
• Integrated multi-port SDRAM scheduler
• Firewall and security support

SDRAM L3 Interconnect Block Diagram and System Integration
The SDRAM L3 interconnect is composed of two main blocks: the SDRAM adapter and the SDRAM scheduler.

The SDRAM adapter is responsible for bridging the hard memory controller (HMC) in the FPGA fabric to the SDRAM scheduler. The adapter is also responsible for ECC generation and checking.

The ECC register interface provides control to perform memory and ECC logic diagnostics.

The SDRAM scheduler is responsible for traffic-shaping accesses to the SDRAM controller in the FPGA fabric.

Three AXI ports are exposed to the FPGA fabric, allowing soft logic masters to access the SDRAM controller through the same scheduler unit as the MPU subsystem and other masters within the HPS. A 32-bit control interface is exposed to the main L3 interconnect, providing the MPU subsystem with control access to the SDRAM adapter and the HMC in the FPGA fabric.



The figure shows the SDRAM L3 interconnect between the HPS and the FPGA fabric. Inside the HPS, the MPU subsystem (64-bit AXI), the main L3 interconnect (64-bit AXI data plus a 32-bit control connection), and three FPGA master ports (32/64/128-bit AXI, 32/64-bit AXI, and 32/64/128-bit AXI) each pass through a firewall into the SDRAM scheduler. The scheduler connects through a 256-bit link to the SDRAM adapter, which connects to the hard memory controller (64/128/256-bit data + ECC); the HMC drives the SDRAM I/O (16/32/64-bit data + ECC).

The HMC in the FPGA fabric has a dedicated connection to the SDRAM L3 interconnect. This connection allows the HMC to become operational before the rest of the FPGA has been configured.

About Arbitration

Master-to-Slave Connectivity Matrix

The system interconnect is a partially connected crossbar switch.

The following table shows the connectivity of all the master and slave interfaces in the system interconnect.



Table 7-1: Master-to-Slave Connectivity Matrix

The matrix columns are the slave interfaces: Observability Network; HPS-to-FPGA Bridge; Lightweight HPS-to-FPGA Bridge; DDR; ACP; Boot ROM; On-Chip RAM; STM; DAP; and the L4 ECC, L4 SYS, L4 SP, L4 AHB, L4 MP, and L4 Main bus slaves.

The matrix rows are the master interfaces: MPU L2 Cache Master 0, MPU L2 Cache Master 1, FPGA-to-HPS Bridge (1), FPGA-to-SDRAM Bridge, DMA, EMAC 0/1/2, USB 0/1, NAND, SD/MMC, ETR, and DAP. A mark at a row-column intersection indicates that the master can access the slave.

(1) The FPGA-to-HPS bridge is connected to all L4 system slaves except the FPGA manager.

About the Observability Network
The observability network connects probes to an observer. It is physically separate from the NoC datapath.

Through the observability network, you can perform these tasks:

• Update master and slave peripheral security features through the DAP




• Collect HPS counter statistics

Functional Description of the System Interconnect
The system interconnect provides access to a 4 GB address space.

Address spaces are divided into one or more nonoverlapping contiguous regions.

The following figure shows the relationships between the HPS address spaces. The figure is not to scale.

Figure 7-7: HPS Address Space Relationships

The figure shows the address spaces over the 0 GB to 4 GB range: the SDRAM window occupies the bottom 3 GB and maps onto the SDRAM region; the FPGA slaves region sits above 3 GB; and the peripheral region, which contains the lightweight FPGA slaves window, sits at the top of the map. RAM or SDRAM occupies the bottom of the L3 SDRAM space.

The window regions provide access to other address spaces. The thin black arrows indicate which address space is accessed by a window region (arrows point to the accessed address space).

The following table shows the base address and size of each region that is common to the L3 and MPU address spaces.

Table 7-2: Common Address Space Regions

| Region Name | Description | Base Address | Size |
|---|---|---|---|
| FPGASLAVES | FPGA slaves | 0xC0000000 | 960 MB |
| PERIPH | Peripheral | 0xFC000000 | 64 MB |
| LWFPGASLAVES (2) | Lightweight FPGA slaves | 0xFF200000 | 2 MB |

(2) The LWFPGASLAVES region is part of the PERIPH region.
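
As an illustration of how bare-metal software typically consumes this map, the following C sketch encodes the regions from Table 7-2 as constants. The macro names and the helper function are illustrative and not part of this manual.

```c
#include <stdint.h>
#include <stdbool.h>

/* Common HPS address space regions (Table 7-2). */
#define FPGASLAVES_BASE   0xC0000000u            /* FPGA slaves, 960 MB */
#define FPGASLAVES_SIZE   (960u * 1024u * 1024u)
#define PERIPH_BASE       0xFC000000u            /* peripherals, 64 MB */
#define PERIPH_SIZE       (64u * 1024u * 1024u)
#define LWFPGASLAVES_BASE 0xFF200000u            /* lightweight FPGA slaves, 2 MB */
#define LWFPGASLAVES_SIZE (2u * 1024u * 1024u)

/* True if addr falls inside the lightweight FPGA slaves window
 * (which itself lies inside the PERIPH region). */
static bool in_lwfpga_window(uint32_t addr)
{
    return (addr - LWFPGASLAVES_BASE) < LWFPGASLAVES_SIZE;
}
```

Note that FPGASLAVES_BASE + FPGASLAVES_SIZE equals PERIPH_BASE, so the two regions tile the top gigabyte of the map.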



SDRAM Address Space
The SDRAM address space is up to 4 GB. The entire address space can be accessed through the FPGA-to-SDRAM interface from the FPGA fabric. The total amount of SDRAM addressable from the other address spaces varies.

There are cacheable and non-cacheable views into the SDRAM space. When a master of the system interconnect performs a cacheable access to the SDRAM, the transaction is performed through the ACP port of the MPU subsystem. When a master of the system interconnect performs a non-cacheable access to the SDRAM, the transaction is performed through the 32-bit main L3 interconnect master of the SDRAM L3 interconnect.

L3 Address Space
The L3 address space is 4 GB and applies to all L3 masters except the MPU subsystem.

The L3 address space configurations contain the following regions:

• The peripheral region is the same as the peripheral region in the MPU address space, except that the boot ROM and internal MPU registers (SCU and L2) are not accessible.

• The FPGA slaves region provides access to 960 MB of slaves in the FPGA fabric through the HPS-to-FPGA bridge.

• The SDRAM window region is 3 GB and provides access to the bottom 3 GB of the SDRAM address space. The system interconnect remap register in the system manager determines whether the 256 KB starting at address 0x0 is mapped to the on-chip RAM or the SDRAM. The SDRAM is mapped to address 0x0 on reset. Any L3 master performing a cacheable access has a cache-coherent view of this memory space.

Table 7-3: L3 Address Space Regions

| Region Name | Description | Base Address | Size |
|---|---|---|---|
| L3SDRAM | SDRAM window | 0x00000000 | 3 GB |
| L3LOWOCRAM | On-chip RAM (remap bit is set) | 0x00000000 | 256 KB |
| L3SDRAMLOWOCRAM | SDRAM window when on-chip RAM is present | 0x00100000 | 3071.75 MB (3 GB - 256 KB) |

Related Information
Introduction to the Hard Processor System
More information about L3 address space mapping

Address Remapping

Related Information
Cortex-A9 MPU Subsystem
For more information, refer to the Cortex-A9 MPU Subsystem chapter in the Hard Processor System Technical Reference Manual.

Secure Transaction Protection
The system interconnect provides two levels of secure transaction protection:

• Security firewalls: enforce Secure World read and write transactions.



• Privilege filter: leverages the firewall mechanism and provides additional security by filtering the privilege level of L4 slave transactions. The privilege filter applies to writes only.

All slaves on the SoC are placed behind a security firewall. A subset of slaves are also placed behind a privilege filter. Transactions to these slaves must pass both a security firewall and the privilege filter to reach the slave.

System Interconnect Master Properties
The system interconnect connects to various master interfaces through the main L3 interconnect and the SDRAM L3 interconnect.

Table 7-4: System Interconnect Master Interfaces

| Master | Width | Interface Clock | Security | SCR Access (3) | Privilege | Issuance (Read/Write/Total) | Type |
|---|---|---|---|---|---|---|---|
| MPU Subsystem L2 Cache M0/1 | 64 | mpu_l2ram_clk | Per transaction | Yes | Transaction based | 7/12/23 | AXI |
| FPGA-to-HPS Bridge | 128/64 | fpga2hps_clk | Per transaction | Yes | Transaction based | 8/8/16 | AXI |
| FPGA-to-SDRAM Bridge | 128 | f2h_sdram_clk[2:0] | Per transaction | No | Transaction based | 8/8/8 | AXI |
| DMA | 64 | l4_main_clk | Per transaction | No | User mode | 8/8/8 | AXI |
| EMAC 0/1/2 | 32 | l4_mp_clk | Secure/Nonsecure | No | User mode | 16/16/32 | AXI |
| USB OTG 0/1 | 32 | l4_mp_clk | Nonsecure | No | User mode | 2/2/4 | AHB |
| NAND | 32 | l4_mp_clk | Nonsecure | No | User mode | 8/1/9 | AXI |
| SD/MMC | 32 | l4_mp_clk | Nonsecure | No | User mode | 2/2/4 | AHB |
| ETR | 32 | cs_at_clk | Per transaction | No | Transaction based | 32/1/32 | AXI |
| AHB-AP | 32 | l4_mp_clk | Per transaction | No | Transaction based | 1/1/1 | AHB |

(3) Security control register (SCR)

Related Information
SoC Security
For more information, refer to the SoC Security chapter in the Hard Processor System Technical Reference Manual.




System Interconnect Slave Properties
The system interconnect connects to various slave interfaces through the main L3 interconnect, the SDRAM L3 interconnect, and the seven L4 peripheral buses. After reset, all slave interfaces are set to the secure state.

Table 7-5: System Interconnect Slave Interfaces

| Slave | Width | Interface Clock | Acceptance (Read/Write/Total) (4) | Security | Privilege | Interface Type |
|---|---|---|---|---|---|---|
| SP Timer 0/1 | 32 | l4_sp_clk | 1/1/1 | Boot Secure (5) | User Mode | APB |
| I2C 0/1/2/3/4 | 32 | l4_sp_clk | 1/1/1 | Boot Secure | User Mode | APB |
| UART 0/1 | 32 | l4_sp_clk | 1/1/1 | Boot Secure | User Mode | APB |
| GPIO 0/1/2 | 32 | l4_sp_clk | 1/1/1 | Boot Secure | User Mode | APB |
| Quad SPI Flash CSR | 32 | l4_mp_clk | 1/1/1 | Boot Secure | User Mode | APB |
| SD/MMC CSR | 32 | l4_mp_clk | 1/1/1 | Boot Secure | User Mode | APB |
| EMAC 0/1/2 | 32 | l4_mp_clk | 1/1/1 | Boot Secure | User Mode | APB |
| OSC Timer 0/1 | 32 | l4_sys_clk | 1/1/1 | Secure | Privileged | OCP |
| Watchdog 0/1 | 32 | l4_sys_clk | 1/1/1 | Secure | Privileged | APB |
| Clock Manager | 32 | l4_sys_clk | 1/1/1 | Secure | Privileged | OCP |
| Reset Manager | 32 | l4_sys_clk | 1/1/1 | Secure | Privileged | OCP |
| System Manager | 32 | l4_sys_clk | 1/1/1 | Secure | Privileged | OCP |
| FPGA Manager Data | 32 | l4_sys_clk | 1/1/1 | Secure | Privileged | OCP |
| FPGA Manager CSR | 32 | l4_sys_clk | 1/1/1 | Secure | Privileged | OCP |
| DAP | 32 | l4_sys_clk | 1/1/1 | Secure | Privileged | APB |
| DMA Secure CSR | 32 | l4_main_clk | 1/1/1 | Boot Secure | User Mode | APB |
| DMA Non-Secure CSR | 32 | l4_main_clk | 1/1/1 | Boot Secure | User Mode | APB |
| SPI Slave 0/1 | 32 | l4_main_clk | 1/1/1 | Boot Secure | User Mode | APB |
| SPI Master 0/1 | 32 | l4_main_clk | 1/1/1 | Boot Secure | User Mode | APB |
| Quad SPI Flash Data | 32 | l4_mp_clk | 1/1/1 | Boot Secure | User Mode | AHB |
| USB OTG CSR 0/1 | 32 | l4_mp_clk | 1/1/1 | Boot Secure | User Mode | AHB |
| NAND CSR | 32 | l4_mp_clk | 1/1/1 | Boot Secure | User Mode | AHB |
| NAND Command and Data | 32 | l4_mp_clk | 1/1/1 | Boot Secure | User Mode | AHB |
| SD/MMC ECC | 32 | l4_mp_clk | 1/1/2 | Secure | Privileged | OCP |
| On-Chip RAM ECC | 32 | l4_mp_clk | 1/1/2 | Secure | Privileged | OCP |
| DMA ECC | 32 | l4_mp_clk | 1/1/2 | Secure | Privileged | OCP |
| Quad SPI Flash ECC | 32 | l4_mp_clk | 1/1/2 | Secure | Privileged | OCP |
| NAND ECC | 32 | l4_mp_clk | 1/1/2 | Secure | Privileged | OCP |
| USB OTG ECC | 32 | l4_mp_clk | 1/1/2 | Secure | Privileged | OCP |
| EMAC ECC | 32 | l4_mp_clk | 1/1/2 | Secure | Privileged | OCP |
| ACP | 64 | mpu_l2ram_clk | 13/5/18 | Secure | User Mode | AXI |
| APB-DP | 32 | cs_pl4 | 1/1/1 | Secure | Privileged | APB |
| DDR | 256 | f2h_sdram_clk[2:0] | 16/16/16 | Secure/Nonsecure | User Mode | Avalon |
| Lightweight HPS-to-FPGA Bridge | 32 | lwh2fpga_clk | 16/16/16 | Boot Secure | User Mode | AXI |
| HPS-to-FPGA Bridge | 128/64 | hps2fpga_clk | 16/16/16 | Boot Secure | User Mode | AXI |
| On-Chip RAM | 64 | l3_main_free_clk | 2/2/2 | Secure/Nonsecure per 4 KB region | User Mode | AXI |
| Boot ROM | 32 | l3_main_free_clk | 1/1/2 | Secure/Nonsecure | User Mode | AXI |
| STM | 32 | cs_at_clk | 1/2/2 | Secure/Nonsecure | User Mode | AXI |

(4) Acceptance is the maximum number of transactions accepted.
(5) Boot Secure means the slave is in the secure state after reset.

Note: APB-DP has no direct connection to the system interconnect.

Clocks
The system interconnect clock is driven by the clock manager. The system interconnect's hardware-sequenced clock is part of the NoC clock group.

Related Information
Clock Manager
For more information about the NoC clock group, refer to "Hardware Sequenced Clock Groups" in the Clock Manager chapter.

Resets
The system interconnect has one reset signal. The reset manager drives this signal to the system interconnect on a cold or warm reset. On reset, the boot ROM is mapped to address 0x0.

Related Information
Reset Manager
For more information, refer to the Reset Manager chapter in the Hard Processor System Technical Reference Manual.

Functional Description of the Rate Adapter
Pending




Functional Description of the Firewall

Security

Slave Security

The system interconnect enforces security through the slave settings. The slave settings are controlled by the NoC security control register (SCR) in the observability network. Each L3 and L4 slave has its own security check and programmable security settings. After reset, every slave of the system interconnect is set to a secure state (referred to as boot secure). The only accesses allowed to secure slaves are by secure masters.

The NoC implements five firewalls to check the security state of each slave, as listed in the following table.

Table 7-6: NoC Firewalls

| Name | Function | Default State |
|---|---|---|
| On-Chip RAM Firewall | Filters access to the on-chip RAM | Secure |
| Peripherals Firewall | Filters access to slave peripherals (SPs) on the L4 main, L4 master peripherals, L4 AHB, and L4 slave peripherals buses | Secure |
| System Firewall | Filters access to system peripherals on the L4 system bus, the L4 ECC bus, and the DAP | Secure |
| HPS-to-FPGA Firewall | Filters access to the FPGA through the HPS-to-FPGA and lightweight HPS-to-FPGA bridges | Secure |
| DDR Firewall | Filters access to the DDR SDRAM | Secure |

To change the security state, you must perform a secure write to the appropriate SCR register of a secure slave. A nonsecure access to the SCR register of a secure slave triggers a response with random data. No bus error is triggered.
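
A minimal sketch of such a configuration write follows. The register address and bit position are hypothetical placeholders, since this chapter defers the firewall register map to a future revision; only the access pattern (a secure 32-bit write to the slave's SCR) follows from the text above.

```c
#include <stdint.h>

/* Hypothetical SCR location for one slave; the real offsets belong to
 * the NoC firewall register map, which this chapter does not define. */
#define NOC_FW_SLAVE_SCR ((volatile uint32_t *)0xFFD11000u)
#define SCR_NONSECURE    (1u << 0)   /* assumed bit position */

/* Mark the slave nonsecure. This write must itself be issued by a
 * secure master as a secure transaction; a nonsecure access to the SCR
 * of a secure slave just returns random data, with no bus error. */
static void slave_set_nonsecure(void)
{
    *NOC_FW_SLAVE_SCR |= SCR_NONSECURE;
}
```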

Master Security

Masters of the system interconnect are either secure or nonsecure, or set their security on a per-transaction basis. L2 cache masters 0 and 1, the FPGA-to-HPS bridge, the DMA, and the DAP perform secure and nonsecure accesses on a per-transaction basis. All other system interconnect masters perform nonsecure accesses.

Accesses to secure slaves by nonsecure masters result in a response with random data.



Functional Description of the Scheduler

SDRAM Scheduler
All data entering the multi-port scheduler must pass through the firewalls. This is the same firewall scheme that the rest of the system interconnect uses.

The multi-port scheduler has the following features:

• Single 256-bit output connected to the SDRAM L3 adapter:
  • Capable of issuing transactions at the memory device line rate
  • Traffic is the aggregate of all inputs
  • The scheduler provides output using the OCP protocol
• Five input connections:
  • 64-bit connection from the MPU
  • 64-bit connection from the main L3 interconnect
  • Two 128-bit connections from the FPGA
  • 64-bit connection from the FPGA
• Independent firewalls on each input:
  • The MPU firewall has four regions
  • The main L3 firewall has eight regions
  • Each FPGA port firewall has four regions (up to 12 across all ports)
• Memory timings programmed into the scheduler

SDRAM L3 Firewalls
Access to the HMC is controlled by implementing multiple firewalls to support secure regions in the SDRAM scheduler.

All inputs into the Scheduler from the MPU, Main L3, and FPGA masters must pass through the firewall.

Exclusive Monitor
All ports of the SDRAM L3 interconnect support mutually exclusive accesses. This requires the memory to be configured as normal memory, shareable, or non-cacheable.

The basic process for an exclusive access is:

• A master performs an exclusive read from an address location.
• At some later time, the master attempts to complete the exclusive operation by performing an exclusive write to the same address location.
• The exclusive write access of the master is signaled as:
  • Successful, if no other master has written to that location between the read and write accesses.
  • Failed, if another master has written to that location between the read and write accesses. In this case the address location is not updated.
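
On the Cortex-A9, a master drives this sequence with the ARMv7 exclusive-access instructions. The following C sketch (GCC inline assembly) shows one exclusive read-modify-write attempt; it is a generic ARMv7 pattern, not code from this manual.

```c
#include <stdint.h>

/* One exclusive read-modify-write attempt using LDREX/STREX.
 * Returns 0 if the exclusive write succeeded, nonzero if another
 * master wrote the location between the read and the write. */
static int exclusive_increment(volatile uint32_t *addr)
{
    uint32_t value, failed;

    __asm__ volatile(
        "ldrex  %0, [%2]\n\t"      /* exclusive read                     */
        "add    %0, %0, #1\n\t"    /* modify                             */
        "strex  %1, %0, [%2]\n\t"  /* exclusive write; %1 = 0 on success */
        : "=&r"(value), "=&r"(failed)
        : "r"(addr)
        : "memory");

    return (int)failed;
}
```

In practice the attempt is retried in a loop until the store-exclusive reports success.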

Functional Description of the SDRAM L3 Interconnect
The SDRAM L3 interconnect serves two main functions:



• Multi-port scheduling between the MPU, the main L3 interconnect, and the FPGA
• Connectivity between the main L3 interconnect and the FPGA HMC

SDRAM Adapter
The SDRAM adapter connects the SDRAM scheduler with the hard memory controller.

The SDRAM adapter provides the following functionality:

• Wiring of the OCP master to hard lines to the HMC in the FPGA
• ECC generation, detection, and correction
• Operation at memory half rate:
  • Matches the interface frequency of the single-port memory controller in the FPGA
  • Connections to the MPU, main L3 interconnect, and FPGA undergo clock crossing
  • Maximum frequency of 667 MHz (DDR4-2666)
• Signaling of a bus error if memory is accessed after calibration has failed

Width Adaptation
There are three supported data widths: 16-bit, 32-bit, and 64-bit. In each mode, the HMC adapter automatically converts to the appropriate data width for the Avalon® streaming (Avalon-ST) output.

The adapter tail contains a pipeline stage between the IO48 signals and the adapter signals. The IO48 is 48 I/O lines configured as six 8-bit parallel I/O ports. The interface to each IO48 runs on the clock from that IO48 block's phase-locked loop (PLL). These clocks are synchronous with the clocks of the two other IO48 blocks.

ECC
The SDRAM adapter ECC can detect and correct single-bit errors and detect double-bit errors.

The addresses are merged with data and are used for checking. This configuration detects whether correct write data is written to an incorrect location in memory, and whether correct data is read from an incorrect location in memory.

Note: Data and address errors are detected independently.

Write Behavior

For write accesses, the following occurs:

• The address is combined with the write data, and ECC is generated from the combined word.
• A partial write access results in a read-modify-write (RMW).

Read Behavior

When an error is detected with read data, the following correction occurs:

• If a single-bit error is found in the read data, it is corrected.
• If a single- or double-bit error is found in the read address, a bus error is issued.
• If a double-bit error is found in the read data, a bus error is issued.

SDRAM Adapter Interrupt Support

There are three interrupts:



• The status interrupt occurs when:
  • Calibration is complete.
  • The ECC is unable to schedule an auto-correction write-back to memory. This occurs only when the auto-write-back FIFO is full.
• The ECC read-back interrupt occurs when an ECC single-bit error is detected in the read data. When this happens, the return data is corrected and returned to the NoC.
• The double-bit or fatal error interrupt occurs when any of the following three errors happens:
  • A double-bit error has been detected in the read data, which cannot be corrected.
  • A single-bit error has been detected in the address field. This means that the data that the adapter is returning has no bit errors, but is not the requested data. When this happens, the adapter returns a data error along with the data.
  • Any of the DDR4 devices has triggered its ALERT pin because:
    • An address or command parity check has failed.
    • A write data CRC check has failed.
    • The adapter cannot gracefully recover, because the SDRAMs do not provide feedback on the failure case.

Clocks
All the logic in the adapter and the adapter tail is synchronous, effectively running off a single clock provided by the SDRAM PLL.

Resets

The behavior of a warm and a cold reset is the same for the adapter; the difference is in the behavior of the system after it comes out of reset. Because the SoC cannot reset the HMC directly, the FPGA HMC MMR is used to trigger the HMC into a reset state along with the I/O.

Functional Description of the Arbitration

Master Caching and Buffering Overrides

Some of the peripheral masters connected to the system interconnect do not have the ability to drive the caching and buffering signals of their interfaces. The system manager provides registers so that you can enable cacheable and bufferable transactions for these masters. The system manager drives the caching and buffering signals of the following masters:

| Master Peripheral | System Manager Register Group | Register |
|---|---|---|
| EMAC0, EMAC1, and EMAC2 | emacgrp | l3master |
| USB OTG 0 and USB OTG 1 | usbgrp | l3master |
| NAND flash | nandgrp | l3master |
| SD/MMC | sdmmcgrp | l3master |

At reset time, the system manager drives the cache and buffering signals for these masters low. In other words, the masters listed do not support cacheable or bufferable accesses until you enable them after reset.



There is no synchronization between the system manager and the system interconnect, so avoid changing these settings when any of the masters are active.
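
The following C sketch illustrates the enable sequence under stated assumptions: the register address and bit positions are hypothetical, so take the real offsets and fields of the l3master registers from the system manager register map.

```c
#include <stdint.h>

/* Hypothetical location and fields of the EMAC group l3master register
 * in the system manager; see the System Manager chapter for the real map. */
#define SYSMGR_EMAC_L3MASTER ((volatile uint32_t *)0xFFD06044u)
#define L3MASTER_CACHEABLE   (1u << 0)   /* assumed bit */
#define L3MASTER_BUFFERABLE  (1u << 1)   /* assumed bit */

/* Enable cacheable and bufferable EMAC transactions. Because the system
 * manager and the interconnect are not synchronized, call this only
 * while the EMACs are quiescent. */
static void emac_enable_cacheable_bufferable(void)
{
    *SYSMGR_EMAC_L3MASTER |= L3MASTER_CACHEABLE | L3MASTER_BUFFERABLE;
}
```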

Related Information
System Manager
For more information about enabling or disabling this feature, refer to the System Manager chapter in the Hard Processor System Technical Reference Manual.

Arbitration

At the entry point to the system interconnect, all transactions are allocated a local quality of service (QoS). QoS specifies the transaction's arbitration priority. The system interconnect allows transactions with a higher QoS to use a greater share of system interconnect bandwidth. The transaction arbitration throughout the system interconnect uses this QoS value. The QoS controls for each master connected to the system interconnect are separated into read and write QoS priority values.

At any arbitration node, a fixed priority exists for transactions with different QoS values. The highest QoS value has the highest priority. If there are coincident transactions at an arbitration node with the same QoS value that require arbitration, the system interconnect uses a least recently used (LRU) algorithm.

You can programmatically configure the QoS value for each master through the appropriate write_qos register.
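
As a sketch of the programming model, the following C snippet raises one master's write QoS. The register address and field width are hypothetical placeholders, since the interconnect register map is deferred to a future revision of this document.

```c
#include <stdint.h>

/* Hypothetical write_qos register for one master in the NoC. */
#define NOC_MASTER_WRITE_QOS ((volatile uint32_t *)0xFFD12008u)
#define QOS_FIELD_MASK       0x3u   /* assumed field width */

/* Raise the master's write-transaction priority. Higher QoS values win
 * the fixed-priority comparison at each arbitration node; equal values
 * fall back to least-recently-used (LRU) selection. */
static void set_write_qos(uint32_t qos)
{
    *NOC_MASTER_WRITE_QOS = qos & QOS_FIELD_MASK;
}
```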

Quality of Service
Pending

Functional Description of the Observability Network
Pending

Configuring the System Interconnect

Related Information
SoC Security
For more information, refer to the SoC Security chapter in the Hard Processor System Technical Reference Manual.

Configuring the Scheduler

FPGA Port Configuration
The FPGA has three outputs that pass through the firewall before connecting to the SDRAM scheduler.

You can configure the FPGA-to-HPS SDRAM (F2SDRAM) ports to data widths of 32, 64, or 128 bits:

• F2SDRAM 0: 32-, 64-, or 128-bit data widths
• F2SDRAM 1: 32- or 64-bit data widths
• F2SDRAM 2: 32-, 64-, or 128-bit data widths

You can select one of four port configurations, each a combination of SDRAM ports 0 through 2. After selecting a port configuration, you can disable any ports that you do not need.



Note: The total data width of all interfaces is limited to a maximum of 256 bits in the read direction and 256 bits in the write direction.

| Port Configuration | F2SDRAM 0 | F2SDRAM 1 | F2SDRAM 2 |
|---|---|---|---|
| 1 | 32-bit | 32-bit | 32-bit |
| 2 | 64-bit | 64-bit | 64-bit |
| 3 | 128-bit | unused | 128-bit |
| 4 | 128-bit | 32-bit | 64-bit |

Memory Timing Configuration
The SDRAM L3 interconnect provides fully-programmable timing parameter support for all JEDEC-specified timing parameters.

The following lists the handoff information used to control the SDRAM scheduler:

• The scheduler is aware of the SDRAM timing so that it can guide traffic into the HMC.
• This information is not used to control the Nios II subsystem, which is in control of setting up memory timings in the HMC hardware.

Configuring the HMC Controller

Adapter Memory-Mapped Registers
The SDRAM adapter memory-mapped registers (MMRs) are used for configuring and reading ECC status, and for configuring data width adaptation for 16-, 32-, and 64-bit data widths.

Note: The FPGA HMC MMR can be programmed as secure or nonsecure.
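
For illustration, a minimal ECC status poll might look like the following C sketch. All offsets and bit positions are invented placeholders; the real adapter register map is deferred to a future revision (see "Address Map and Register Definitions" below).

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical SDRAM adapter MMR locations; placeholders only. */
#define SDRAM_ADAPTER_BASE 0xFFCFA000u
#define ECC_STATUS_OFF     0x04u
#define ECC_SBE            (1u << 0)   /* assumed: single-bit error seen */
#define ECC_DBE            (1u << 1)   /* assumed: double-bit error seen */

/* Report whether the adapter has flagged an uncorrectable
 * (double-bit) ECC error. */
static bool sdram_ecc_fatal_seen(void)
{
    volatile uint32_t *status =
        (volatile uint32_t *)(SDRAM_ADAPTER_BASE + ECC_STATUS_OFF);
    return (*status & ECC_DBE) != 0;
}
```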

HMC Memory-Mapped Registers
The FPGA HMC MMR is used for accessing the Nios II subsystem within the HMC to determine the state of the interface, and for triggering the HMC into a reset state along with the I/O.

Note: The FPGA HMC MMR can be programmed as secure or nonsecure.

Configuring the Observability Network
Pending

Address Map and Register Definitions
The address map and register definitions for this module will be available in a future document revision.



Document Revision History

Table 7-7: Document Revision History

| Date | Version | Changes |
|---|---|---|
| August 2014 | 2014.08.18 | Initial release |
