Technische Universität Berlin
Fachgebiet Elektronik und medizinische Signalverarbeitung
Institut für Energie- und Automatisierungstechnik

Student Project: Acoustic Camera

Professor: Prof. Dr.-Ing. R. Orglmeister, TU Berlin
Supervisors: Dipl.-Ing. Timo Tigges, TU Berlin; Timo Lausen, TU Berlin
Participants: Alexander Jahnel, Alexander Semmler, Daniel Schaufele, Dominik Matter, Florian Stahl, Leonard Krug, Maik Sternberg

Berlin, April 2, 2015
Contents

Abstract

1 Hardware Development
1.1 Requirements
1.1.1 TI ADS1298
1.2 Design
1.2.1 Mechanical Design
1.2.2 Circuit Development
1.2.3 Power Considerations
1.2.4 Signal Considerations
1.2.5 PCB Design
1.2.6 Amplifier-boards
1.2.7 ADC-board
1.2.8 Interfaces
1.3 Assembly
1.3.1 Setup
1.3.2 Testing
1.3.3 Amplification Ratio of the amplifier-board
1.3.4 Frequency response of the amplifier-board
1.3.5 Measurement
1.3.6 Possible sources of error

2 Software Development
2.1 Application Flow
2.2 Transfer Protocol
2.3 Lightweight IP
2.4 UART debugging
2.5 SPI Communication
2.6 A/D-Converter Operation
2.7 DSP and A/D-Converter Setup

3 Algorithm
3.1 Beamforming
3.1.1 Preface
3.1.2 Principle
3.1.3 Equations
3.1.4 Implementation
3.2 Upsampling
3.2.1 Approach
3.2.2 FIR-filter
3.2.3 Simulation with MATLAB
3.2.4 Implementation of the FIR on the DSP

4 Graphical User Interface
4.1 Performance of the GUI
4.2 GUI structure
4.3 Flowchart
4.4 Include the Camera to Matlab
4.5 TCP/IP connection
4.6 Convert data
4.7 Convert grayscale image to color image
4.8 Overlay images

5 Conclusion

6 Annex
6.1 Inventor Drawings of the Assembly

References
List of Figures

1 View of the result
2 Display
3 CAD-Design
4 Schematic of the amplifier-board
5 Schematic of the signal input circuitry of the ADC-board
6 Layout of the amplifier-board
7 Schematic of the power supply circuit
8 View of the power supply circuit
9 View of the ADS1298 ADC and the surrounding Caps and Resistors
10 3D-Rendering front and back of Amplifier Boards
11 3D-Rendering front and back of ADC-Board
12 SPI Interface for ADS1298
13 Frequency response of an amplifier-board
14 Basic App Flowchart
15 Available and used (red) interfaces of the OMAP-L138 experimenter board
16 SPI Signals
17 Example of SPI bus (red) and select lines (green)
18 SPI Modes
19 ADS1298 Commands
20 ADS1298 output format
21 Expansion Connector J30
22 OMAP-L138 SPI Block Diagram
23 Beamforming-Principle [2]
24 Delay-and-Sum-Beamforming – scan-point and source at the same position [2]
25 Delay-and-Sum-Beamforming – scan-point and source at different positions [2]
26 Overlapped picture
27 AC-Principle [3]
28 MATLAB simulation with sinusoidal source at (x, y) = (40, 15)
29 Approach for the upsampling method by adding zeros and following lowpass filtering
30 FIR-filter structure
31 Simulation of a white noise signal by upsampling with different factors. Here, the number of samples analysed with the beamforming algorithm is 2000. The real source is located at [x, y] = [5, 5]
32 GUI
33 Flowchart of the GUI
34 Device info
35 Array to matrix
36 Overlay images
37 View of the front panel
38 View of the side panel
39 View of the assembled frame
Abstract
This report documents the development of a planar acoustic microphone array with an integrated USB camera. The microphone signals are amplified and A/D-converted, and the sample data is supplied to a DSP via high-speed SPI. The DSP runs a beamforming algorithm and sends the resulting frames via TCP/IP to a MATLAB GUI, which displays the result.
Figure 1: View of the result
Figure 2: Display
1 Hardware Development
To provide the DSP-board with viable input signals, additional hardware had to be
designed, assembled and tested.
1.1 Requirements
The project members agreed on a planar microphone array consisting of eight electret microphones in a circular arrangement (Figure 1). Electret microphones were selected for their low cost. The number of microphones was chosen because 8-channel, parallel delta-sigma converters are available on a single chip (TI's ADS1298 [9]) via free sampling. Using only one IC for A/D conversion would simplify the PCB layout while improving reliability. The SPI interface would provide communication to the DSP-board. A/D conversion was a central aspect of the hardware development, so the following section discusses the suitability of the chip for the project.
1.1.1 TI ADS1298
Parallel sampling (for comparability of the different microphone signals) was a key requirement. For this, the ADS1298 was an ideal candidate because of its eight parallel channels and its use of delta-sigma converters [9]. Due to the measuring principle of delta-sigma converters, their sampling frequency is vastly higher than the output data rate requires, so aliasing is suppressed effectively. The 32 kSPS sample rate combined with the 20 MHz SPI interface was deemed capable of providing real-time data for the DSP (whether the DSP could calculate the results in real time was not clear at that time, but it was decided that the A/D front-end should not be the bottleneck). However, 24-bit conversion was considered excessive in light of the high communication bandwidth required for the amount of data produced at this level of precision. Also, the high anticipated noise power would lead to a relatively low signal-to-noise ratio, so the effective number of bits would be far below 24. Apart from this, the ADS1298 is an expensive chip designed for medical applications, and it contains more functionality than just the required A/D-converters. The accompanying waste of resources is acceptable for a prototype, especially since free sampling provided the IC at no cost. For an actual product, other solutions would be beneficial.
To achieve a demonstrable performance (ca. 1m distance of sound source to array),
the low-gain output signals from the microphones would need a high amplification before
A/D-conversion. Furthermore, a mechanical platform for the array was needed. The
specific design was left open and is discussed in the following section.
1.2 Design
The design was influenced by the performance requirements, the tight timetable and the limitations of both available parts and construction methods. The most affected area was the PCB layout: via connections were not part of the production process and instead had to be added afterwards by soldering a fine wire to both sides of the PCB. As modern-day CAD tools do not expect this, it introduced errors (which had to be fixed with improvised methods), leading to project delay. To improve the probability of success, simplicity became the main overall design goal.
1.2.1 Mechanical Design
The algorithm-group required a 60mm radius for the circular array. This allowed placing
the USB-Camera (supplied by the instructors) in the center of the circle. The final shape
of the mechanical design was mainly influenced by the PCB-layouts and the desire to
fix all developed PCBs to the same structure as the microphone array.
Figure 3: CAD-Design
The mechanical design was done in the CAD suite "Autodesk Inventor 2012", and the resulting blueprints were given to the university's internal workshop for manufacture. The parts were made of PMMA, a common prototyping material because it is easy to machine and appealing when finished.
1.2.2 Circuit Development
Circuit design and PCB layout were done using a free license of "Eagle" by CadSoft.
At first, the circuit diagrams of both boards had to be specified. This led to two main
design considerations:
1.2.3 Power Considerations
To reduce the impact on the PCB layout, it was decided to use a single supply voltage for all ICs. A 3.3 V fixed-voltage linear regulator, fed by an external transformer delivering 9 V DC from the regular 230 V AC mains supply, would deliver the required current. After calculating the maximal required current, we chose the TI REG103, because it provides the necessary output current (up to 500 mA) at the low noise and low dropout voltage required for our audio application [5] (see also Table 1). It should be impossible to destroy the circuit by exchanging the poles of the pin jack, so a bridge rectifier ensures the right polarity of the supply voltage.
Amount  Name        Current   Total current
1       ADS1298     3.25 mA   3.25 mA
8       TL971       3.2 mA    25.6 mA
8       microphone  0.5 mA    4 mA
Sum                           32.85 mA

Table 1: Current Calculation
1.2.4 Signal Considerations
The signal flow from the microphones to the SPI interface was outlined in the following manner:

a) Decoupling of the DC offset resulting from the supply voltage at the electret microphone and amplification of the remaining AC signal as close to the source as possible. This is done by means of a single inverting amplifier; since we are only interested in the amplitude of a sine signal, we chose the inverting amplifier over the non-inverting one to save parts. According to the datasheet of the electret microphone, we used a 2.2 kΩ resistor to ensure optimal impedance matching [6]. Afterwards, the DC offset from the supply is decoupled by a 100 nF capacitor and then combined with the virtual VDD/2 at the negative input of the OpAmp. VDD/2 is generated by a voltage divider consisting of two 10 kΩ resistors between the supply voltage (3.3 V) and GND. A single operational amplifier was considered enough in our case, because trials had shown that the TL971 managed a gain of 1000 in an inverting amplifier setup; it has a gain-bandwidth product of 12 MHz, which should leave us with an effective bandwidth of 12 kHz [12]. The gain is adjusted by the two resistors R4 and R6 according to the formula Av = R4/R6 = 1.2 MΩ / 1.2 kΩ = 1000. The inverting amplifier is followed by a highpass filter to reduce noise and to decouple the VDD/2 DC offset again. The highpass filter has a cutoff frequency fc of about 160 Hz, as shown by the calculation fc = 1/(2·π·R·C) = 1/(2·π·10·10³·100·10⁻⁹) = 159.15 Hz. This frequency was chosen to dampen the noise generated by the 50 Hz voltage-supply net up to its third harmonic.
Figure 4: Schematic of the amplifier-board
b) Since the inverting amplifier circuit adds a voltage offset (to make sure that the amplified signal stays within the supply rails of the OpAmp), we decouple it before transmitting the amplified signal to the ADC-board. For this we placed a highpass behind the inverting amplifier, which also reduces the noise generated by the 50 Hz supply net. As a result, we had to add a DC offset to the signal on the ADC-board to ensure that the quiescent voltage would sit right in the middle between GND potential and the upper reference voltage of the ADC (2.4 V). This was done by means of a voltage divider between the 3.3 V supply voltage and GND. We used 100 kΩ resistors against the supply voltage and 54.5 kΩ resistors (a 100 kΩ resistor in parallel with a 120 kΩ resistor) against ground to obtain a stable bias of approximately 1.2 V.
Figure 5: Schematic of the signal input circuitry of the ADC-board
c) The interface to the DSP-board should be SPI, because it was the best-suited interface provided by the board for a close-range connection able to handle the amount of data produced by the ADC without adding any overhead.

To allow amplification directly at the microphone output, it was decided to design a separate PCB for each microphone, to be placed in close proximity.
1.2.5 PCB Design
Two different PCBs were to be developed: eight amplifier-boards (one for each microphone) and one ADC-board holding the 8-channel ADC.
1.2.6 Amplifier-boards
As already mentioned, we chose an inverting amplifier setup followed by a highpass filter for the amplifier-boards. The main design objective for the amplifier-boards was to minimize space and wire lengths.
Figure 6: Layout of the amplifier-board
1.2.7 ADC-board
The design of the power supply was based on the following assumptions: we wanted to use an already existing 230 V AC to 9 V DC transformer, which defined the pin jack. As already mentioned, a rectifier followed by a fixed-voltage regulator ensures the 3.3 V supply, and lastly a 0 Ω resistor enables the user to separate the analog and digital voltage supplies of the ADS1298. Whilst the digital supply voltage is limited to 3.3 V anyhow, the analog supply voltage may be up to 5 V (if an external voltage supply is added).
Figure 7: Schematic of the power supply circuit
During the design of the power supply, we mainly took care to provide large areas of continuous copper to make sure that the resulting heat and current can be transported easily.
Figure 8: View of the power supply circuit
As can be seen in the snapshot from the design tool below, the main goal during the placement of the ADC and its adjacent capacitors and resistors was to place the capacitors (especially those with a lower capacitance) as close to the IC as possible. Also, the distances between the signal input, the voltage dividers and the ADC were kept as short as possible to minimize crosstalk.
Figure 9: View of the ADS1298 ADC and the surrounding Caps and Resistors
To improve the understanding of the design of the boards, we generated the following 3D renderings:
Figure 10: 3D-Rendering front and back of Amplifier Boards
Figure 11: 3D-Rendering front and back of ADC-Board
1.2.8 Interfaces
The following interfaces between the different components were specified:

The connections between the microphones and the amplifier-boards are one twisted pair of wires per microphone. These are soldered to the pins of the microphone and end in a female 2x1 Molex connector on the amplifier-board. Since the electret microphone mainly consists of a FET with an open-source output, one ground and one power connection are sufficient, because the signal is modulated on top of the power connection.

The amplifier-boards themselves are connected to the ADC-board with two twisted pairs of wires. One pair is solely for the power supply, whilst the other pair transmits the amplified microphone signal (not differential). These cables are attached to the amplifier-boards with 4x1 female Molex connectors and end on two SUB-D connectors (one for power, one for signal) on the ADC-board.
Figure 12: SPI Interface for ADS1298
Lastly, the connection between the ADC-board and the DSP-board consists of a flat ribbon cable carrying the SPI bus and the common ground. This cable starts at a female Molex connector on the ADC-board, as shown above, and ends on three separate connectors for the DSP-board. This is due to the mapping of the required pins on the adapter-board for the DSP-board, which we neither designed nor assembled ourselves.
1.3 Assembly
1.3.1 Setup
Due to the aforementioned lack of vias, we had to place the signal SUB-D connector on spacers and solder small wires to both sides of the PCB. Also, some parts of the copper were so damaged that we had to coat some of the connections with tin. Additionally, the footprints of the power pin jack and the rectifier proved to be a bad choice, since we were not able to find matching parts in time and had to improvise.
1.3.2 Testing
During the commissioning of the ADC we noticed that the negative input of the differential signal input of the ADS1298 was not supplied with GND from the ADC-board directly, but in fact via the shielding of the twisted-pair wires used to connect the different boards. We therefore connected the negative signal pins of the corresponding SUB-D connector with the GND plane of the ADC-board by soldering a wire across all the pins and the GND plane. Apart from that, we did not encounter other grave problems while testing the setup, and the group handling the SPI interface confirmed the usability of the arrangement.
1.3.3 Amplification Ratio of the amplifier-board
During the development of the amplifier-board we ran multiple tests, primarily to ensure that the resulting signal would have a good SNR to prevent artifacts in the future camera picture, but also to check that the resulting signal would not be saturated all the time. The results were disturbing at first, because high-frequency modulated signals around 100 MHz led to aliasing effects and a high amount of noise. However, those signals ceased to exist after a couple of weeks, and a gain of 1000 led to a clear signal in more recent tests.
1.3.4 Frequency response of the amplifier-board
1.3.5 Measurement
Using a single speaker facing the microphone array at around 40 centimeters distance, driven by an online sine-tone generator on a laptop, we determined the frequency response of one amplifier-board in 100 Hz steps ranging from 100 Hz to 10 kHz. The test setup was chosen to mimic the real use case of the microphone array.
Figure 13: Frequency response of an amplifier-board
As can be seen, the passband ranges from 300 Hz to around 8 kHz. Whilst the low frequencies are cut off intentionally to reduce the noise produced by the 50 Hz supply net, the strong attenuation of frequencies between 8 kHz and 12 kHz (due to the gain-bandwidth product of the OpAmp) is not intended. Although the electret microphone is able to convert sounds with frequencies of up to 16 kHz, the amplifier circuit was never intended to work over the whole frequency range, because the microphone array (by its radius) was designed to perform at a frequency of around 5 kHz.
1.3.6 Possible sources of error

The laptop's audio channel was set to a flat frequency response to provide viable signals to the speaker. We measured the RMS voltage of the signal using an oscilloscope, where low-voltage signals below 50 mV could not be distinguished from noise. We assumed the signal voltage to be zero in those cases, because the frequencies would clearly be outside the passband of the amplifier circuit. That decision leads to the very steep transitions from passband to stopband in the figure shown above.
2 Software Development
2.1 Application Flow
The OMAP-L138 (Open Multimedia Applications Platform) is a dual-core applications processor. It features an ARM9 processor core and a C674x-series digital signal processor (DSP) on the same die. It is possible to run an operating system on the ARM processor and outsource heavy processing tasks to the signal processor.

In this project, only the resources of the DSP are used, since the primary purpose was to do calculations in a single-process context. After the controller finishes booting into its main routine, a few basic setup operations must be performed. Figure 14 shows the basic states of the program running on the processor. After the setup phase is completed, the system waits for a network connection. If a connection is established, the controller executes the following routines consecutively:
void ADS1298ReadBlock(void);
void Beamforming(float *samples, float *result);
The method ADS1298ReadBlock reads a number of samples into an array. This block
of samples is passed to the Beamforming algorithm, which generates the final pixel map.
Due to the lack of a display, the computed data has to be transferred to the PC front-end
for presentation to the user. Different interfaces were considered for this task.
The most widely used interface for DSP-PC communication is the Universal Asynchronous Receiver Transmitter (UART). However, the available data rates are very low (only 115.2 kbit/s at the highest speed). At the beginning of the project, we planned to have a much higher resolution, so the transfer of a single frame would have taken several seconds. This makes UART a very impractical choice. However, we did use UART for error printouts, which was very useful for debugging.

Another choice is the Universal Serial Bus (USB). This interface is readily available on every PC and the DSP has the appropriate interfaces. However, USB uses a rather complicated protocol stack, and without an operating system the implementation would be very complicated, so this was ruled out as well.

Several other interfaces like the Serial Peripheral Interface (SPI) and Inter-Integrated Circuit (I2C) would be very easy to implement on the DSP side, but the PC has no built-in interface for these protocols, so an additional adapter would have been needed, which was also deemed impractical.
The OMAP-L138 experimenter board also features a built-in Ethernet interface. The implementation on the PC side is extremely simple, because MATLAB has built-in TCP/IP sockets. On the DSP side, the lightweight IP (lwIP) library can be used; it is described in the following section. Several example projects are shipped with the TI StarterWare and further facilitate the implementation. Thus, the Ethernet interface was chosen for communication with the PC. A simple crossover patch cable was used to remove the need for a complex network architecture with a network switch.
The available and used interfaces are shown in fig. 15.
2.2 Transfer Protocol
When starting the DSP, the network interface is created and assigned the static IP address 192.168.247.1 with the network mask 255.255.255.0. Then a TCP server is started that listens on port 2000.

When a client opens a connection to the TCP server, recording and data processing start, and the frames are transmitted as soon as they are computed. This process continues until the client disconnects.

The data format is designed to be as simple as possible with minimum overhead, while allowing the transmission of all necessary data and some flexibility in the parameters. For this purpose, a simple header was defined with 4 values of 2 bytes each. Afterwards, the frame data is transmitted row-first with 1 byte per pixel. This can be implemented as a simple C struct, as shown in listing 1.
Listing 1: Frame structure
struct frame
{
    uint16_t width;
    uint16_t height;
    uint16_t dA;
    uint16_t dE;
    uint8_t data[WIDTH*HEIGHT];
};
The first two values contain the width and height of each frame in pixels, transmitted as 16-bit unsigned integers. Because of the rather high dynamic range of the output in different environments, a high number of bits would otherwise have to be used to transmit each pixel. Instead, the minimum and maximum values of all pixels in one frame are computed and saved in the dA and dE fields. Afterwards, the output data is transformed with eq. (1) and transmitted as 8-bit unsigned integers. At the receiver, this transformation can be reversed with eq. (2). This way, the data rate is just 1 byte per pixel, while allowing a very high dynamic range in different situations.
x' = round((x − min) / (max − min) · 255)    (1)

x = x' · (max − min) / 255 + min    (2)
2.3 Lightweight IP
Contrary to the sample projects, the lwIP code had to be copied into our project, because some configuration options (e.g. the static network address) had to be changed, which would not have been possible otherwise. The path to the header files also had to be changed, because otherwise the files could not be compiled.
The public interface of the network module consists of the following methods:
Listing 2: Public interface of network module
void NetworkSetUp(void);
void NetworkSendData(const void* data, size_t bytes);
bool NetworkIsConnected(void);
The NetworkSetUp method configures the pin multiplexing, enables the power of the Ethernet module and calls several submethods for further initialization. The network interrupts are enabled and mapped to the methods EMACCore0RxIsr and EMACCore0TxIsr, which call the interrupt handler routines of the lwIP library. After the lwIP library is initialized and set to a static IP address, the TCP listening socket is created and the tcpAcceptConnection method is used as a callback for newly accepted TCP connections.
When a client connects to the TCP server, the tcpAcceptConnection callback is called and the socket is saved to the global variable tcp_pcb. If another client is already connected, an error message is given instead, because for simplicity of implementation only one connection is allowed at a time. The tcpErr method is used as the error callback and closes the connection when the client disconnects. The tcpDataSent callback waits for the complete transmission of the most recent chunk of data and starts the transmission of the next chunk when there is still unsent data in the buffer.

The NetworkIsConnected method simply checks whether a client is currently connected to the TCP server, in order to start and stop data recording and processing when a client connects or disconnects.
When the beamforming algorithm wants to transmit data to the PC, it calls the NetworkSendData method. This method prints an error if no client is currently connected or if the previous frame has not been fully transmitted. Otherwise, a new buffer is allocated with malloc, the data is copied, and the tcpDataSent method is called to transmit the first chunk of data.

The tcpDataSent method calls the lwIP tcp_write method with as many bytes as fit in the TCP send buffer and then checks for errors. If the transmission of the frame is complete, the buffer is freed. In the end, the tcp_output method is called to trigger the immediate transmission of the data. Otherwise the data might sit in the TCP send buffer for some time and several frames might be transmitted simultaneously, which would not be very useful.
2.4 UART debugging
For debugging purposes, the UART interface was used. Several convenient methods are provided to easily print text messages to a PC connected with a null-modem cable. In a production environment, this cable can simply be omitted and the program still works without modifications. It has to be noted that the transmission of long strings in time-critical parts should be avoided, because the transmission is implemented via polling and blocks until the data has been transmitted.
The public interface of the UART module consists of the following methods:

Listing 3: Public interface of UART module

void UartSetUp(void);
void UartCharDisplay(unsigned char ch);
void UartWrite(char* message);
void UartPrintf(const char* format, ...);
The UartSetUp method configures the pin multiplexing and enables and configures the UART module of the OMAP-L138 experimenter board.

For sending text, several methods are available. The UartCharDisplay method transmits a single character and is called repeatedly by the UartWrite method, which can transmit a zero-terminated string.

For convenience, the UartPrintf method is provided, which features a printf-like interface. This method uses vsnprintf to print the string to a buffer and then calls UartWrite. For this reason, the output string is restricted to 1024 characters and is truncated if it is too long.
These methods are used in several places in other modules, to print out error messages
or give information about the state of the program.
Figure 15: Available and used (red) interfaces of the OMAP-L138 experimenter board
2.5 SPI Communication
SPI (Serial Peripheral Interface) is a simple clock-synchronous, high-speed, chip-to-chip
communication interface. Its primary use case is communication between a host
controller and peripheral devices: many high-throughput peripherals such as A/D
converters and sensors provide this interface as a means of data exchange. Today,
almost all microcontrollers and embedded systems offer at least one SPI module. Its
primary advantage is a high throughput rate (the clock rate is not limited by the
specification) at the cost of a relatively high signal count compared to interfaces like
I2C. SPI uses a single-master, multiple-slave bus structure and works in simplex and
duplex modes. In duplex mode, a minimum of four signals is required. Each slave is
selected individually, one at a time for each transaction. Therefore, each additional
slave requires an additional signal, called chip select (or slave select).
Signal       Description
Clock        Data valid
MOSI/DOUT    Master Out / Slave In
MISO/DIN     Master In / Slave Out
CS           Chip Select (active low)

Figure 16: SPI Signals
It is possible to share the clock, MOSI and MISO signals among all slave devices in a
bus-like fashion. Whether this layout works depends on how the slaves control their
MISO port: if each slave is able to tri-state (high impedance) this output, a bus
structure can be chosen, and each device must ignore all communication while not
selected.
Figure 17 shows an example of an SPI bus layout. An SPI transaction is always started
by the master device. It must pull the corresponding chip-select line low to indicate a
start condition to the slave. After a number of bits have been transferred, a stop
condition is issued by pulling the line high again. The master device must provide the
clock signal to the slave device. There exist four modes of synchronizing data to the
clock, depending on its idle polarity and phase: data is sampled on either a rising (↑)
or a falling (↓) clock edge. See Table 18 for an overview. It is possible to transmit
words of variable bit sizes, but most devices limit the range to 8 up to 24 bits per
transaction.
There are some (minor) disadvantages compared to other serial interfaces. The signal
count is a linear function of the number of slave devices, and since there is no
protocol-level handshaking, the master cannot tell whether a transaction was successful.
Figure 17: Example of SPI bus (red) and select lines (green)
Clock normal     read MISO   set MOSI
Phase = 0        ↑           ↓
Phase = 90       ↓           ↑

Clock inverted   read MISO   set MOSI
Phase = 0        ↓           ↑
Phase = 90       ↑           ↓

Figure 18: SPI Modes
2.6 A/D-Converter Operation
The ADS1298 needs to be properly configured before any conversion can be started.
This is done with a set of commands; see Table 19 for a reference.
Command           Description
Wake up           Exit standby mode
Standby           Enter standby mode
Reset             Reset the converter
Start             Start a conversion
Stop              Stop a conversion
Read Continuous   Enter continuous mode
Stop Continuous   Stop continuous mode
Read Data         Read conversion result
Read Register     Read from a register
Write Register    Write to a register
Figure 19: ADS1298 Commands
The converter operates in three different states:
1. Idle Mode
2. Standby
3. Read Data Continuous Mode
When power is first applied, the converter resets into state 3. In this state, all
commands except Start, Stop and Stop Continuous are ignored, and the data-ready signal
toggles at the programmed sample rate. This signal can be used to synchronize the
OMAP-L138 with the converter. In order to change the sample rate, diff-amp gain and
other parameters, the converter state has to be changed to Idle mode. The firmware
provides a feature-rich interface to control the converter operation and configuration;
see Listing 4 for a reference.
Figure 20 shows the output format after a conversion. It consists of 9 words with a
word size of 24 bits: the first word is the converter status, followed by 8 words of
sample data.
Listing 4: Interface of ADS1298 module
int ADS1298ReadReg(int reg);
void ADS1298WriteReg(int reg, int data);
int ADS1298DataReady(void);
int ADS1298GetDeviceID(void);
void ADS1298Wakeup(void);
void ADS1298EnterStandby(void);
void ADS1298Reset(void);
void ADS1298Start(void);
void ADS1298Stop(void);
void ADS1298StartSingleRead(void);
void ADS1298StartContRead(void);
void ADS1298StopContRead(void);
Figure 20: ADS1298 output format
2.7 DSP and A/D-Converter Setup
The firmware provides a method to initialize the OMAP-L138 and ADS1298.
void ADS1298SetUp(void);
It has two major tasks:
• Configure OMAP-L138 SPI module
• Initialize ADS1298
The Switched Central Resource of the OMAP-L138 needs to be configured in order to
connect the DSP subsystem with the SPI module (see Figure 15).
The programming interface uses the SPI1 module, because the development board only
exposes this instance, via the audio expansion header J30 (see Figure 21).
Figure 21: Expansion Connector J30
The following table shows the pin mapping:
Function (Direction)   OMAP Pin       Expansion Pin   ADC-Board Pin
MOSI (O)               SPI1 SIMO      63              DIN
MISO (I)               SPI1 SOMI      65              DOUT
Clock (O)              SPI1 CLK       64              CLK
Chip Select (O)        /SPI1 SCS[1]   95              /CS
Data Ready (I)         GP1[11]        90              /DR
The OMAP-L138 has a sophisticated SPI module with the following features:
• Variable word size up to 16 bits
• Automatic toggling of chip-select lines
• Timers to control setup and hold timing requirements
• Data streaming via Direct Memory Access (DMA)
• Enable input to auto-start a transaction
• Registers for different word sizes or clock rates
The basic structure is shown in Figure 22.
Figure 22: OMAP-L138 SPI Block Diagram
As mentioned in the feature list, it is possible to create multiple SPI configurations.
This is very handy, because the converter expects 8 bits for each command but 216 bits
for the result-word transaction. The OMAP-L138 supports 4 different configurations
through 4 independent registers. Since the receive and transmit shift registers store
16 bits, the following configurations are created:
// Set SPI clock rate:
// SPIClkConfigure*(BASEADDR, PLL_CLK, SPI_CLK, FORMAT_REGISTER)
SPIClkConfigure*(SOC_SPI_1_REGS, 150000000, 15000000, SPI_DATA_FORMAT0);
SPIClkConfigure*(SOC_SPI_1_REGS, 150000000, 15000000, SPI_DATA_FORMAT1);

// Set SPI clock phase = 90, non-inverted:
// SPIConfigClkFormat*(BASEADDR, CLK_POL/CLK_INPHASE, FORMAT_REGISTER)
SPIConfigClkFormat*(SOC_SPI_1_REGS, SPI_CLK_POL_LOW, SPI_DATA_FORMAT0);
SPIConfigClkFormat*(SOC_SPI_1_REGS, SPI_CLK_POL_LOW, SPI_DATA_FORMAT1);

// Set word size:
// SPICharLengthSet*(BASEADDR, BITS, FORMAT_REGISTER)
SPICharLengthSet*(SOC_SPI_1_REGS, 8, SPI_DATA_FORMAT0);
SPICharLengthSet*(SOC_SPI_1_REGS, 16, SPI_DATA_FORMAT1);
To select the appropriate format register, the following method is used:
SPIDat1Config*(SOC_SPI_1_REGS, SPI_DATA_FORMAT0, SPI_SELECT_NCS1);
To transmit data, the following method is used:
SPITransmitData1*(SOC_SPI_1_REGS, data_word);
Methods marked with an asterisk are wrapper methods that access the actual peripheral
hardware registers. They are provided by the StarterWare [7] firmware library from
Texas Instruments.
The last step is to configure the converter. For best performance it should sample at
the maximum possible rate; the ADS1298 has a maximum sample rate of 32000 samples per
second.
After a reset, it falls back to the following default settings:
• Sample rate of 250 samples per second
• Gain of 15 dB on all channels
• ADC voltage reference buffer turned off
The following code snippet shows the required converter initialization steps.
ADS1298SetUp:
    /* SPI CODE END */

    /* ADS1298 CODE BEGIN */
    int r, i;
    // Reset and wait for some time
    ADS1298Reset();
    // After reset, the ADS1298 is in READC mode
    // and no configuration is possible
    ADS1298StopContRead();
    // Sanity check: read the device ID
    r = ADS1298GetDeviceID();
    UartPrintf("ads1298 id: %i\n", r);
    if (r != ADS1298_DEVICEID)
        CriticalErrorSystemHalt();

    // Set sample rate to 32 kSps
    ADS1298WriteReg(CONFIG1_BASE, CONFIG1_B7_HR);
    // Enable the ADC voltage reference buffer
    ADS1298WriteReg(
        CONFIG3_BASE,
        CONFIG3_B7_NPD_REFBUF |
        CONFIG3_B6_ONE |
        CONFIG3_B0_RLD_STAT);
    // Configure diff-amp gain to x1 (0 dB) for all channels
    for (i = CH1SET_BASE; i <= CH8SET_BASE; i++)
        ADS1298WriteReg(i, CHNSET_B4_GAIN0);

    // Go back to READC mode
    ADS1298StartContRead();
    /* ADS1298 CODE END */
3 Algorithm
3.1 Beamforming
3.1.1 Preface
The following chapter describes the employed beamforming algorithm, which forms a
central part of the acoustic camera. In this project a time-domain algorithm is used,
meaning that all computations operate directly on the raw digitized signals measured
by the microphones. Besides time-domain algorithms there are also those which operate
in the frequency domain, which come with other advantages and disadvantages.
For the acoustic camera in this project a delay-and-sum beamforming algorithm is used.
This algorithm is easy to implement and includes the whole signal bandwidth, as
opposed to frequency-based algorithms which decompose the signal into its individual
frequency components. It is also better suited to work with short transient signals [8].
The equations are taken from [3] and were first implemented and tested in Matlab
before being written in C on the DSP.
3.1.2 Principle
Figure 23 shows the basic principle of the beamforming algorithm. There is a sound
Figure 23: Beamforming-Principle [2]
source on the left from which the signal propagates through space before it is detected
by a microphone array. This array can have different shapes for different applications
[10]. The sound does not arrive at the microphones simultaneously but delayed,
depending on the distance between source and microphone. It is ultimately this delay
which makes it possible to reconstruct the source position. To achieve this, every
microphone output signal is first shifted by a certain number of samples depending on
the current scan point before all signals are summed up (delay-and-sum beamforming).
This leads to a focused beam which is capable of scanning predetermined points in a
target area.
Figures 24 and 25 show two different scenarios. The former depicts a setting where
Figure 24: Delay-and-Sum-Beamforming – scan-point and source at the same position[2]
the current scan point is identical to the source position. Accordingly, the sum of
all shifted signals leads to constructive interference and a large output signal. The
latter, on the other hand, presents a scenario where the scan point does not match the
source. As a result, the delayed and summed signals cancel each other out, leading to
a smaller output signal.
The camera scans many such points in a predefined area. For each point an intensity
value is computed, which then forms a two-dimensional intensity matrix. The values of
the matrix are rendered in color and finally overlaid on a black-and-white picture
from a USB webcam. Figure 26 shows an example with a white-noise source.
Figure 25: Delay-and-Sum-Beamforming – scan-point and source at different positions[2]
Figure 26: Overlapped picture
3.1.3 Equations
The following equations are explained by means of Figure 27 [3].
Figure 27: AC-Principle [3]
First, the delays between each single point p on the scan area and the different
microphones mi are computed:

δi(p) = (fs / c) · ‖p − mi‖    (3)

where ‖·‖ denotes the Euclidean norm, so ‖p − mi‖ is the distance between the point p
and the i-th microphone. The delays are measured in samples, which means that δi(p)
specifies the number of steps by which to shift the corresponding signal. fs is the
sampling frequency and c is the speed of sound. The result is an i × p dimensional
matrix.
The second equation shifts the signals and adds them together:

s(p)[k] = Σ_{i=1}^{Nmic} si[k − δi(p)]    (4)

This yields a two-dimensional intensity matrix over all points in the target area.
Since such a matrix exists for each sample index, it is appropriate to average the
signal over time.
3.1.4 Implementation
As the Analog-Digital converter used has only eight input channels, the number of
microphones was also limited to eight. Hence the decision was made to arrange the
microphones in a circle in order to obtain a symmetrical structure. The minimal
microphone distance was set to 3 cm, resulting in an upper frequency limit (spatial
Nyquist frequency) of

fλ/2 = c / (2d) = (340 m/s) / (0.06 m) ≈ 5.7 kHz.

The lower frequency limit is determined by the number of samples stored in each
iteration. To process three periods of a 50 Hz signal it is necessary to store a
signal of length t = 3 / (50 Hz) = 60 ms. This is equivalent to
n = t · fs = 60 ms · 32 kHz = 1920 samples, with fs denoting the sampling frequency.
The number of microphones and also the arrangement of the array have a big impact on
the functionality and quality of the acoustic camera (e.g. shape of main- and sidelobes)
[10]. Therefore, for future improvement it would be essential to first determine the
intended application of the camera and the corresponding parameters more precisely.
Figure 28 shows the results of a MATLAB simulation in which a sinusoidal source is
located at (x, y) = (40, 15). While the location of the source is clearly depicted, there
Figure 28: MATLAB simulation with sinusoidal source at (x, y) = (40, 15)
is also considerable variance along with obvious artifacts. Both could be reduced by
optimizing the arrangement of the microphones and by filtering the signal.
3.2 Upsampling
Source localization from an audio signal is implemented with the simplest available
algorithm, delay-and-sum beamforming. Its performance depends on several degrees of
freedom: on the one hand the usable bandwidth depends on the physical structure (the
arrangement of the microphones), on the other hand the sampling frequency must be
considered as well. In addition, the number of recorded samples per computation step
has to be chosen carefully for a real-time capable system, because the computing power
of the DSP is limited.
The algorithm localizes a signal using simple vector geometry. Central to this is the
propagation delay δ of the signal from the source to the scattered scan points. This
propagation delay depends on the distance from the source p to the microphone m, but
also on the selected sampling frequency fs.
δ(p) = (fs / c) · ‖p − m‖    (5)
For discrete signals the propagation delay must be an integer number of samples, so
the computed delay is rounded. For adjacent scan points the propagation delays often
differ only after the decimal point, which makes it impossible to distinguish the true
location of the source. The sampling frequency therefore has a direct impact on the
accuracy.
Increasing the sampling frequency improves the accuracy, because the difference
between the propagation delays of adjacent points becomes larger. The disadvantage is
the growing number of samples per recorded signal: as a result, the record length must
be reduced and thus the lower limit of the usable bandwidth rises. For each
application it is advisable to weigh these trade-offs.
The recording system has a maximum possible sampling frequency of 32 kHz. To increase
the effective sampling frequency beyond that, the method of so-called upsampling is
used.
3.2.1 Approach
With upsampling, the signal is extended by interpolation: additional data points are
inserted and the result is low-pass filtered (figure 29). The method used here works
as follows:
For example, to double the sampling frequency, a new data point with the value zero is
inserted between each pair of existing data points. Subsequently, the resulting signal
is low-pass filtered by an FIR filter whose cut-off frequency is the Nyquist frequency
of the original signal, in this case 16 kHz. The filtering suppresses all
higher-frequency components and interpolates the inserted zeros between the adjacent
data points. The result is a signal with the original frequency spectrum but an
increased sampling frequency.
The same principle applies to an n-fold increase of the sampling frequency, with the
number of data points multiplied by the factor n. In addition, the filter introduces a
time shift whose value depends on its order.
Source: http://www.dsprelated.com/showarticle/198.php
Figure 29: Approach for the upsampling method by adding zeros and subsequent low-pass filtering
3.2.2 FIR-filter
The properties of an FIR filter structure are suitable for this purpose. An FIR filter
has a finite impulse response and a linear phase response. The structure of an FIR
filter is shown in figure 30. At every time step, the input signal x(n) is multiplied
with the filter coefficients bN (the impulse response) and summed up to form the
output y(n).
The beamforming algorithm runs in the time domain. To keep the computational
complexity as low as possible, the FIR filter is implemented in the time domain, too.
A transformation into the frequency domain would require transforming all 8 microphone
signals and then transforming them back after filtering.
Source: http://commons.wikimedia.org/wiki/File:FIR Filter.svg
Figure 30: FIR-filter structure
The required impulse response is calculated by equation 6 with the sampling frequency
fs. The order n of the filter determines the amount of additional attenuation for
frequencies higher than the cut-off frequency. A higher order consequently also means
a higher time delay of the filter (equation 7).

g(n) = sin(2π fg · (nTs − τ)) / (π · (nTs − τ))    (6)

with

τ = (n / 2) · Ts    (7)

and

Ts = 1 / fs    (8)
3.2.3 Simulation with Matlab
To verify the upsampling algorithm, it is first simulated in Matlab before being
implemented on the DSP. A white-noise signal is used for this purpose. The pattern of
scan points has a size of 13x15 with a spacing of 2 meters. The source is located at
the point [x,y] = [5,5].
Figure 31: Simulation of a white-noise signal with upsampling at different factors. Here, the number of samples analysed with the beamforming algorithm is 2000. The real source is located at [x,y] = [5,5]
Figure 31 shows how the accuracy improves as the upsampling factor increases. Without
upsampling, the source localization is more or less incorrect under the given
conditions. Using the upsampling method, the result improves with an increasing factor.
This method is implemented on the DSP as an option. Unfortunately, it did not work
properly by the time this report was written.
3.2.4 Implementation of the FIR Filter on the DSP
To create the filter impulse response, the C code below is implemented on the DSP. The
length of the coefficient array (upsample_coeff) depends on the order
(UPSAMPLE_FIRORDER) of the filter. One specific anomaly must be handled in the
calculation: a division by zero can occur. This case has to be detected and the
coefficient set to twice the cut-off frequency.
Listing 5: Create filter coefficients (c-code)
void UpsamplingSetup(void)
{
    uint32_t k;
    // Create filter response = coefficient array
    float sum_coeff = 0.0f;

    for (k = 0; k < UPSAMPLE_FIRORDER; k++) {
        float z = (((float)k + 1) * UPSAMPLE_SAMPLE_RATE) - UPSAMPLE_TAU;
        upsample_coeff[k] = sin(2 * PI * z * (float)UPSAMPLE_FIRFG) / (PI * z);

        // Avoid the division by zero at z == 0
        if (((k + 1) * UPSAMPLE_SAMPLE_RATE) == UPSAMPLE_TAU)
            upsample_coeff[k] = 2 * UPSAMPLE_FIRFG;

        sum_coeff += upsample_coeff[k];
    }

    // Normalize the coefficients to unity gain
    for (k = 0; k < UPSAMPLE_FIRORDER; k++)
        upsample_coeff[k] = upsample_coeff[k] / sum_coeff;
}
The FIR filter function receives an array of input samples and generates an array of
filtered output samples. In every calculation step, the filter coefficients are
multiplied with the same number of input samples. This corresponds to a window of
samples that is shifted through the input signal, advancing by one sample after each
output sample is calculated.
Listing 6: FIR filter (c-code)
static void FIR_filter(float *input, float *output)
{
    int i = 0, k = 0;

    // Reset delay line
    for (i = 0; i < UPSAMPLE_FIRORDER; i++)
        delayline[i] = 0.0f;

    for (i = 0; i < UPSAMPLE_BUFFER_SIZE; i++) {
        float acc = 0;

        // Shift the delay line by one sample
        k = UPSAMPLE_FIRORDER - 1;
        do {
            delayline[k] = delayline[k - 1];
        } while (k-- > 1);

        delayline[0] = input[i];

        // Multiply-accumulate over all coefficients
        for (k = 0; k < UPSAMPLE_FIRORDER; k++)
            acc += upsample_coeff[k] * delayline[k];

        output[i] = (float)UPSAMPLING_FACTOR * acc;
    }
}
4 Graphical User Interface
Through the Matlab GUI Builder a graphical user interface (GUI) was created to start
and stop the measurements and to present the results.
4.1 Function of the GUI
The GUI establishes a TCP/IP connection between the PC and the DSP. As soon as a
connection has been established, the DSP starts calculating the beamforming algorithm.
After each cycle of the algorithm, the DSP sends the calculated data to the PC/GUI,
where it is processed. The DSP stops the beamforming algorithm as soon as the
connection is closed.
4.2 GUI structure
The GUI consists of two buttons, two windows and two text boxes.
Figure 32: GUI
The two buttons control the program: pressing the start button connects the PC to the
DSP; pressing the stop button closes the connection. The DSP IP address and the port
to be used can be entered into the two text fields. In the Axes1 window the grayscale
camera image is overlaid with a color image of the received data. In the Axes2 window
the received data is presented as a surf plot.
4.4 Including the Camera in Matlab
The acoustic camera consists of a microphone array and a camera directed at the
scanning area. The images from the camera are superimposed with the calculated data
from the beamforming algorithm, so the user can see where the detected noise is coming
from. This project uses a USB webcam from Logitech, connected directly to the PC. To
integrate the webcam into Matlab, the command "imaqhwinfo" is used; it lists which
cameras were recognized by Matlab.
Listing 7: Matlab: imaqhwinfo
if ispc
    info = imaqhwinfo('winvideo')
elseif ismac
    info = imaqhwinfo('macvideo')
end
All relevant data of the camera are listed in the struct "info". The DeviceID of the
camera is important for the next step.
Figure 34: Device info
With the command "videoinput" Matlab creates a video stream. The adaptor name differs
between operating systems.
Listing 8: Matlab: create videostream
if ispc
    vid = videoinput('winvideo', 1);
elseif ismac
    vid = videoinput('macvideo', 1);
end
Set the videostream to grayscale:
Listing 9: Matlab: videostream to grayscale
1 set(vid ,’ReturnedColorSpace ’,’grayscale ’);
To start the video stream, use the command "start(videoobject)":
Listing 10: Matlab: start videostream
1 start(vid)
The command "getsnapshot" creates an image from the video stream:
Listing 11: Matlab: getsnapshot
1 frame = getsnapshot(vid);
4.5 TCP/IP connection
The IP address and the port are read from the text boxes in order to start the TCP/IP
connection between PC and DSP.
Listing 12: Matlab: IP address and port
1 ip = get(handles.IP_Adresse ,’String ’)
2 port = str2num(get(handles.Port ,’String ’))
The command "tcpip(ip, port, 'NetworkRole', 'Server'/'Client')" creates a TCP/IP
object.
Listing 13: Matlab: create TCP/IP object
1 t = tcpip(ip,port ,’NetworkRole ’,’Client ’);
It is recommended to close the connection between Matlab and the DSP before
reconnecting, to prevent errors.
Listing 14: Matlab: close and open TCP/IP connection
1 fclose(t)
2 fopen(t)
4.6 Convert data
With the beamforming algorithm the DSP calculates an (m,n) matrix. For sending this
data to the PC, the matrix has to be converted to an array. On the PC, the data must
be converted back into a matrix in order to present the results of the beamforming
algorithm in the GUI.
Figure 35: Array to matrix
With the command "reshape" Matlab can convert an array into a matrix. The variables
"nopx" and "nopy" hold the dimensions of the (m,n) matrix.
Listing 15: Matlab: array to matrix
1 P = reshape(data (9:end),nopx ,nopy);
To overlay the camera image with the result data, both need to be exactly the same
size. The camera image is 320x240 pixels, the data 16x12 pixels, so the result data
has to be scaled up to the size of the camera image. Using the command "imresize"
Matlab can change the image size.
Listing 16: Matlab: resize image
1 a = imresize(P,20);
4.7 Convert grayscale image to color image
The result data from the beamforming algorithm is an (m,n) matrix which corresponds to
a grayscale image. The function "GUI Farben berechnen" converts this (m,n) matrix into
an (m,n,3) matrix (a color image).
The function "Farben berechnen" first creates a matrix for the color image:
Listing 17: Matlab: create matrix
1 [row , col] = size(M);
2 I = ones(row ,col ,3);
The grayscale image (the data from the DSP) has values between 0 and 255; the color
image therefore needs 256 colors. To create the color image, a matrix with 256 colors
has to be generated:
Listing 18: Matlab: create colors
1 c = [255 ,255 ,255; 255 ,255 ,252; ... 0,0,0];
To plot the color image into a window, the color values need to be between 0 and 1:

Listing 19: Matlab: change value range
1 c = c./255;
This loop converts the grayscale image to a color image:
Listing 20: Matlab: create color image
1 for k = 1:row
2 for l = 1:col
3 I(k,l,1) = c(M(k,l)+1,1);
4 I(k,l,2) = c(M(k,l)+1,2);
5 I(k,l,3) = c(M(k,l)+1,3);
6 end
7 end
4.8 Overlay images
To overlay two images in Matlab, the following steps are needed. First, plot the
background image into the window ("frame" is the camera image):
Listing 21: Matlab: overlay images
1 imshow(frame ,’parent ’,handles.axes1);
Hold the plot:
Listing 22: Matlab: overlay images
1 hold(handles.axes1 ,’on’)
The color image ”I” is represented in the window:
Listing 23: Matlab: overlay images
1 h = imshow(I,’parent ’,handles.axes1);
Create a matrix the size of the image, filled with the value 0.5, which sets the
opacity in the next step:
Listing 24: Matlab: overlay images
1 matrix = 0.5* ones(size(frame ,1),size(frame ,2));
Set options for the plot ”h”:
Listing 25: Matlab: overlay images
1 set(h,’AlphaData ’,matrix);
Hold off the plot:
Listing 26: Matlab: overlay images
1 hold(handles.axes1 ,’off’)
The result of these code lines is:
Figure 36: Overlay images
5 Conclusion
We reached our goal of implementing an acoustic camera based on a time-domain
beamforming algorithm on a digital signal processor.
Further performance improvements would be possible by using a higher sample rate, by
upsampling or by changing the microphone arrangement.
6 Annex
6.1 Inventor Drawings of the Assembly
[Inventor drawing of the front panel, sheet 1, A2, drawn 15.12.2014 by Leo; outer dimensions 160,00 mm × 160,00 mm]
Figure 37: View of the front panel
[Inventor drawing "Querplatte", sheet 1 of 1, size C, drawn 27.01.2015 by Leo; dimensions 83,00 mm × 48,00 mm]
Figure 38: View of the side panel
[Inventor drawing of the assembled frame, sheet 1, A2, drawn 31.03.2015 by Leo]
Figure 39: View of the assembled frame
References
[1] E. S. Gopi. Algorithm Collections for Digital Signal Processing Applications using Matlab. Springer, 2007.
[2] HEAD Acoustics. Arraytechnologie und Beamforming. https://www.head-acoustics.de/downloads/de/head-visor/Array Beamforming 07 09d.pdf, Apr. 2015.
[3] B. Zimmermann and C. Studer. "FPGA-based Real-Time Acoustic Camera Prototype". In: ().
[4] Paul M. Embree. C Algorithms for Real-Time DSP. Prentice Hall PTR.
[5] Texas Instruments. DMOS 500mA Low-Dropout Regulator. SBVS010D, 2005.
[6] EKULIT. EMY-9765P. 2015.
[7] Texas Instruments. StarterWare. http://processors.wiki.ti.com/index.php/StarterWare, 2015.
[8] M. Kern. "Ein Beitrag zur Erweiterung von Beamforming-Methoden". PhD thesis. TU Berlin, 2008.
[9] Texas Instruments. Low-Power, 8-Channel, 24-Bit Analog Front-End for Biopotential Measurements. SBAS459J, 2014.
[10] U. Neugebauer. "Entwicklung einer Akustischen Kamera zur Analyse der Geräuschabstrahlung von Antriebsstrangkomponenten". MA thesis. Technische Universität Graz, 2011.
[11] Texas Instruments. OMAP-L138 DSP+ARM Processor Technical Reference Manual. SPRUH77A, 2011.
[12] Texas Instruments. TL97x Output Rail-To-Rail Very-Low-Noise Operational Amplifiers. SLOS467H, 2015.
[13] Texas Instruments. TMS320C674x/OMAP-L1x Processor Serial Peripheral Interface (SPI) User's Guide. SPRUFM4I, 2011.