Extracting Quantitative Descriptions of Pedestrian Pre-crash Postures from Real-world Accident Videos

Martin Schachner, Bernd Schneider, Wolfgang Sinz, Corina Klug

M. Schachner ([email protected], +43-316-873-30362) is a PhD Student, B. Schneider is a researcher, C. Klug is Assistant Professor and W. Sinz is Associate Professor at the Vehicle Safety Institute of Graz University of Technology in Austria.

Abstract  Real-world accident videos represent a promising data source for investigating pedestrian pre-collision behaviour, which significantly affects in-crash kinematics. However, this data source presents challenges in quantifying behaviours and providing pose descriptions that reveal joint locations and angles. This study addresses this issue and introduces a method for quantitative evaluation by extracting 3D poses from real-world accident videos. The method combines common computer vision approaches and optimisation techniques to align a 3D human model to 2D pose information extracted from the videos. The capabilities of the method were assessed by applying it to a dataset containing measured ground truth data. To further demonstrate the method's capabilities on real-world accident videos, a dataset was created from publicly available sources. Three videos showing typical pedestrian pre-collision reactions, such as raising arms or leaning back, were selected from the dataset, and the introduced method was applied to reconstruct the 3D joint positions and angles for multiple video frames prior to impact. The results emphasise that accident videos can be used to obtain quantitative pedestrian postures and movement patterns, which are required to derive realistic boundary conditions for pedestrian simulations.

Keywords  Accident video analysis, pedestrian pre-collision behaviour, 3D posture reconstruction

I. INTRODUCTION

In 2016, more than every fifth road user killed in Europe was a pedestrian [1]. In urban areas, pedestrians account for 40% of fatalities. While other groups are often involved in isolated accidents, pedestrians depend more than the others on the partner protection provided by other road users. Hence, active and passive pedestrian protection systems have gained ever more importance and have become a fixed part of consumer rating programmes, such as the European New Car Assessment Programme (Euro NCAP) [2–3], and of legislation. Active safety systems, such as autonomous emergency braking or evasive steering, are designed to avoid accidents or at least to reduce the relative velocity between the car and the pedestrian. However, even the best estimates for the effectiveness of active safety systems show that not all accidents will be avoidable [4–5]. Therefore, it is important to continuously develop passive and integrated safety systems to address the remaining unavoidable pedestrian accidents.

For the development and assessment of passive pedestrian protection, Human Body Models (HBMs) have gained increasing importance in the last few years [6]. Besides model validity, one major issue is the determination of realistic boundary conditions. The pedestrian impact kinematics are significantly influenced by body posture [7–8], initial body orientation, relative position to the vehicle front [9–10] and the vehicle velocity [11]. Many studies, as well as the Euro NCAP protocol, position pedestrian HBMs in a standard walking posture [7–8][12]. In contrast, real-world observations [13] and volunteer tests [14] investigating pedestrian behaviour in the pre-collision phase conclude that pedestrians typically display certain reactions, such as raising arms, jumping, turning towards or away from the vehicle, leaning back or freezing. Such actions are highly affected by the pedestrian's perception of the approaching danger, the time frame in which to take action and other intrinsic factors such as age or physical condition [14].

Current approaches to determine pedestrian pre-collision behaviour [13–14] have their advantages as well as disadvantages. Volunteer tests under laboratory conditions have been performed in recent years to record 3D motions of volunteers exposed to virtual pedestrian collision scenarios [14]. However, it is unclear how realistically volunteers behave in a laboratory environment, since they are aware of an emerging accident, which is not the case in general real-world situations. The analysis of accident videos from real-world scenarios, either from a car-centred view or from traffic surveillance cameras, is therefore of additional value and has been used to evaluate ground impact patterns [15–17]. Pedestrian behaviour prior to the crash has, however, only been evaluated in a qualitative manner [13] by visual inspection, followed by classification of the observed pre-collision reactions. Although the above-mentioned studies have improved the general understanding of pedestrian pre-crash movements, to the best knowledge of the authors, quantitative joint angles and temporal information with regard to the course of events have to date not been extracted.

In recent years, computer vision and the semantic interpretation of images and videos have been boosted by the use of machine learning techniques [18–19]. New algorithms allow objects to be detected [20] and tracked over multiple video frames [21]. Camera-based pedestrian detection and tracking is often used for environmental perception within active safety systems [22]. Besides pedestrian detection and tracking, estimating the pedestrian's pose is a further challenge [23–26] and is, amongst other methods, suitable for predicting pedestrian movement intentions [22]. Human body posture estimation can be split into two approaches, 2D and 3D. 2D pose estimation aims to locate the major body joints, denoted as key points, within an image. 3D human pose estimation, on the other hand, aims to reconstruct the 3D coordinates of the established key points.

In this study, it was investigated whether 3D pedestrian body postures can be reconstructed from publicly available accident videos.

II. METHODS

For this investigation, a database of pedestrian accident videos was established, the development of which is discussed in the first part of this section. The second part introduces the method used to extract quantitative pedestrian descriptions in the pre-collision phase.

A. Accident Video Dataset

Different sources were considered for the study. Videos of previous investigations [13][17][27] have been provided by the authors and included in this study. Furthermore, public videos from sharing platforms such as YouTube and Vimeo have also been collected. The clips were searched using keywords such as pedestrian accident, pedestrian crash, pedestrian collision and similar. Most results included compilations of various pedestrian accidents, in which meta-information such as the accident location was missing, although some revealed recording date and time. To create an accident video dataset, the compilations were cut such that only frames of the pre-collision phase for each accident scenario were stored.

The accident videos varied significantly in quality and in the viewing points from which they were recorded. They ranged from sequences showing a fully occluded pedestrian recorded at night to videos in which the entire sequence of events could be seen in high spatial and temporal resolution.

The most important quality aspect for the reconstruction was whether or not the pedestrian was occluded in the pre-collision phase. Videos showing fully occluded pedestrians in the pre-collision phase were therefore considered inappropriate for the extraction of quantitative pedestrian descriptions. In order to rate the extracted videos with respect to their quality, a decision procedure was used. This procedure also provides an indication of how many of the publicly available videos would be suitable for reconstruction with the method introduced in this study. The rating procedure is shown in Fig. 1 and incorporates the quality measures occlusion (fully/partly/none), light conditions (bad/good) and spatial resolution (high/low) of the pedestrian, as well as the temporal resolution (high/low) of the video itself. The rating cascades the quality measures similarly to a decision tree: videos showing fully occluded pedestrians were rated 5; if the pedestrian was partly occluded, the light conditions were good but the spatial resolution was low, the rating was 4, etc.

Hence, only videos showing non-occluded pedestrians in good light conditions and with high spatial and temporal resolution are rated 1. Exemplary video frames for each category are shown in Table A-I. The rating results were further used to select the videos for which quantitative pose descriptions were extracted.
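To make the cascade explicit, the following minimal sketch implements a rating function along the lines of Fig. 1. It is illustrative only and not the authors' tool; branches that are not stated explicitly in the text or the figure are marked as assumptions.

```python
def rate_video(occlusion, light, spatial_res, temporal_res):
    """Cascaded quality rating from 1 (good) to 5 (poor), following the logic of Fig. 1.

    occlusion: "fully" | "partly" | "none"; light: "bad" | "good";
    spatial_res / temporal_res: "low" | "high".
    Only the branches stated in the text are certain; the remaining
    branch-to-rating assignments are assumptions for illustration.
    """
    if occlusion == "fully":
        return 5                                   # fully occluded pedestrians are rated 5
    if light == "bad":
        return 5 if occlusion == "partly" else 4   # assumption
    if spatial_res == "low":
        return 4 if occlusion == "partly" else 3   # partly occluded, good light, low resolution -> 4
    if temporal_res == "low":
        return 2                                   # assumption
    return 1 if occlusion == "none" else 2         # non-occluded, good light, high resolutions -> 1
```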


Fig. 1. Video rating schema cascading the quality measures occlusion, light conditions, spatial and temporal resolution. The classification ranges from 1 to 5, where 1 indicates good and 5 poor quality. The colour code represents the classification of the observed quality criteria in the video.

Further, it was observed that accidents were recorded from different viewing points, either stationary (from surveillance cameras) or car-centred (often via dashboard cameras). Car-centred viewing points were further categorised into non-moving and moving viewing points. From moving, car-centred views, the collision happened either with the ego vehicle or with another vehicle. The categorisation into different viewing points reveals which kinds of viewing points are most common. Besides the quality classification, the viewing points were also considered when selecting the videos for which quantitative pose descriptions were extracted.

B. Toolchain

Depth reconstruction from monocular images, as observed in the accident video dataset, is a challenging and even ambiguous task [28]. Common approaches to perform 3D human pose reconstruction from monocular images use pixel coordinates of representative 2D human body joints, e.g., knee and elbow, which are commonly denoted as key points. A bottom-up approach estimates 3D human body postures from key points using deep neural networks trained on large datasets [29]. A top-down approach, on the other hand, optimises the body posture of a 3D human model [30] such that its 2D camera-projected body joints fit the 2D key points in the image [31].

In this study, state-of-the-art computer vision algorithms have been used, extended and combined into a toolchain. The toolchain consists of a semi-automatic 2D key point detection procedure and a 3D pedestrian pose reconstruction based on the investigations in [31]. These two parts form the outline of this section.

1) 2D Human Pose Estimation

For the localisation of 2D key points, deep convolutional networks have proven themselves in recent years. In contrast to many top-down approaches, where explicit models are fitted using appropriate prior information [26], deep neural networks are often better suited to compensate for diverse image data (low spatial resolution, occlusion, extreme joint positions or poor lighting conditions). However, one of the major drawbacks of this methodology is the implicit mapping, which does not allow further insight into the trained model.

The performance of such deep convolutional neural networks is usually evaluated on large, diverse datasets, meaning that the algorithm is trained on a subset of the dataset and its performance is validated and tested on a disjoint subset. The represented human joints therefore depend on the dataset, ranging from limbs only [23][25] to representations of the hands [32], face or feet [24]. Datasets of this kind are mainly annotated by humans, meaning that key points are estimated and cross-validated by the annotators [25][33]. Hence, 2D key point annotations in datasets must be regarded as a best possible estimate and do not fully guarantee anatomically correct point locations, which might be measurable under laboratory conditions.

In order to extract body key points from images, the pre-trained openPose model developed in [24] was used, which is capable of predicting key points of multiple persons in an image. The detected key points combine the common Microsoft Common Objects in Context (MS-COCO) key points [33] and key points from a foot dataset. Hence, the body pose is represented by 25 key points, as shown in Fig. 2. The result of an automatic detection by the openPose framework is shown exemplarily in Fig. 3. Although the model has been trained on a large and diverse dataset, it was not capable of detecting all key points in a reliable manner. To cope with this issue, an additional graphical user interface (GUI) for manual adjustment of the detected key points was implemented, shown in Fig. 4.
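The 25 detected key points per person can be read from openPose's JSON output. The following minimal sketch illustrates this step under the assumption that openPose was run with its JSON export option; it is not part of the authors' toolchain.

```python
import json
import numpy as np

def load_body25_keypoints(json_path):
    """Read 2D key points from one openPose JSON frame (BODY_25 format).

    Each detected person is stored as a flat list [x1, y1, c1, x2, y2, c2, ...]
    of 25 key points, where c is the detection confidence.
    """
    with open(json_path) as f:
        frame = json.load(f)
    people = []
    for person in frame.get("people", []):
        kp = np.asarray(person["pose_keypoints_2d"], dtype=float).reshape(25, 3)
        people.append(kp)  # columns: x [px], y [px], confidence
    return people

# Key points with confidence 0 were not detected and would be candidates for
# manual correction in the GUI described above.
```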



2) 3D Human Pose Estimation

Reference [31] has shown that 3D pose information can be generated by projecting the Skinned Multi-Person Linear model (SMPL) [30] onto corresponding 2D key points. SMPL consists of 24 joints and 6,890 vertices and incorporates a large dataset of 3D human body scans, such that different body shapes can be represented. The different body shapes are modelled by a linear function with coefficients β. The posture is determined by the 72 parameters θ, three for each of the 24 joints.
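For readers unfamiliar with SMPL, the following sketch shows how shape and pose parameters map to mesh vertices and joints. It uses the smplx Python package as an assumed SMPL implementation and a hypothetical model path; the authors' toolchain is based on the optimisation code of [31] instead.

```python
import torch
import smplx  # assumed SMPL implementation; requires the SMPL model files

# Load the SMPL body model (hypothetical local path to the downloaded model files).
model = smplx.SMPL(model_path="models/smpl", gender="neutral")

betas = torch.zeros(1, 10)         # shape coefficients beta (here: the mean shape)
global_orient = torch.zeros(1, 3)  # root rotation in axis-angle form (3 of the 72 pose parameters)
body_pose = torch.zeros(1, 69)     # remaining 23 joints x 3 axis-angle parameters

output = model(betas=betas, global_orient=global_orient, body_pose=body_pose)
vertices = output.vertices         # (1, 6890, 3) mesh vertices
joints = output.joints             # 3D joint locations; the first 24 correspond to the SMPL skeleton
```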

Fig. 2. Representation of the 2D body pose with the 25 key points (nose, eyes, ears, neck, shoulders, elbows, wrists, lower back, hips, knees, ankles, heels, big and small toes), based on [24][33].

Fig. 3. Result of the automated key point annotation by the openPose framework [24].

Fig. 4. Implemented GUI for individual key point adjustment. The green point is the selected one, which can be adjusted via drag and drop.

Therefore, the model maps the shape coefficients β and the pose parameters θ to the coordinates of the vertices. An optimisation was performed in [31] to minimise a loss function consisting of five error terms with corresponding scalar weights λ. Using their notation, the loss function can be written as:

$$E(\vec{\beta}, \vec{\theta}) = E_J(\vec{\beta}, \vec{\theta}; \vec{K}, J_{\mathrm{est}}) + \lambda_{\theta} E_{\theta}(\vec{\theta}) + \lambda_{a} E_{a}(\vec{\theta}) + \lambda_{\mathrm{sp}} E_{\mathrm{sp}}(\vec{\theta}, \vec{\beta}) + \lambda_{\beta} E_{\beta}(\vec{\beta}) \qquad (1)$$

The first term E_J(β, θ; K, J_est) addresses the correspondence of the 2D and 3D joint positions. The 3D joint positions of the SMPL model have been projected to 2D with a perspective camera model with parameters K. The weighted distances between these projected 2D points and the estimated 2D points from the images, J_est, have been summed up over the number of joints. Fig. 6 shows the mapping between the annotated 2D key points and the points of the SMPL model. For the mapping, joints as well as points on the mesh were used. The second error term E_θ(θ) is a pose prior that estimates the probability of a posture, which was trained on the CMU dataset [34]. The third error term E_a(θ) penalises unnatural bending of joints such as knees and elbows. The fourth error term E_sp(θ, β) penalises interpenetrations. This was done by approximating the body shape with capsules. The last error term E_β(β) is a shape prior that penalises deviation from the mean shape. An extension of the described method can be found in [35], which includes hand pose and facial expression.
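As an illustration of the dominant term E_J, the following sketch projects camera-frame 3D joints with a pinhole camera and accumulates confidence-weighted 2D distances. It is a simplified stand-in, not the implementation of [31], which additionally applies a robust penalty to the residuals.

```python
import numpy as np

def reprojection_error(joints_3d, K, joints_2d_est, conf):
    """Simplified E_J: joints_3d (N, 3) in the camera frame, K (3, 3) intrinsics,
    joints_2d_est (N, 2) annotated key points in pixels, conf (N,) confidences."""
    proj = joints_3d @ K.T                       # homogeneous image coordinates
    proj_2d = proj[:, :2] / proj[:, 2:3]         # perspective division
    residuals = np.sum((proj_2d - joints_2d_est) ** 2, axis=1)
    return np.sum(conf * residuals)              # summed over the number of joints
```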

The goal of this study was to extract quantitative pedestrian descriptions in the pre-collision phase, and therefore multiple frames prior to the impact have been considered. When applying the method described above to single images, the predicted shapes can vary considerably from image to image, even though the person is the same. In addition, information from the sequence of images is not used. Both issues represent major drawbacks and therefore the method was extended in three aspects for this study. First, the shape coefficients β were optimised globally over all images. Secondly, temporal information of all frames has been included in the optimisation by introducing the error term E_fr(β, θ), which sums up the distances of joints across adjacent frames. In contrast to the other error terms, there is only one value of this error term for all the frames:



$$E_{\mathrm{fr}}(\vec{\beta}, \vec{\theta}) = \sum_{\text{frame } i} \, \sum_{\text{joint } k} \left\lVert J_k^{i} - J_k^{i+1} \right\rVert^{2} \qquad (2)$$
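A minimal sketch of Eq. (2), assuming the per-frame SMPL joint positions are stacked into a single array (not the authors' code):

```python
import numpy as np

def temporal_smoothness(joints_seq):
    """joints_seq: (n_frames, n_joints, 3) SMPL 3D joint positions over the sequence."""
    diffs = joints_seq[1:] - joints_seq[:-1]   # displacement of each joint between frames i and i+1
    return np.sum(diffs ** 2)                  # one value for the whole sequence, cf. Eq. (2)
```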

Finally, the interpenetration term E_sp(θ, β) was replaced by a term based on the algorithm in [36]. The algorithm uses rays to detect self-intersections of the outer surfaces of the mesh. Let l_sp^out be the length of a detection ray in the self-intersection region for penetration of the outer surface. Then an updated error term can be defined as follows:

$$\hat{E}_{\mathrm{sp}}(V) = \hat{E}_{\mathrm{sp}}(\vec{\beta}, \vec{\theta}) = \sum_{\text{rays}} l_{\mathrm{sp}}^{\mathrm{out}} \qquad (3)$$

where l_sp^out is zero for detection rays with no outer self-intersection region. To summarise, the updated loss function can be written with the introduced error terms Ê_sp(θ, β) and E_fr(β, θ) as:

$$E(\vec{\beta}, \vec{\theta}) = E_J(\vec{\beta}, \vec{\theta}; \vec{K}, J_{\mathrm{est}}) + \lambda_{\theta} E_{\theta}(\vec{\theta}) + \lambda_{a} E_{a}(\vec{\theta}) + \lambda_{\mathrm{sp}} \hat{E}_{\mathrm{sp}}(\vec{\theta}, \vec{\beta}) + \lambda_{\beta} E_{\beta}(\vec{\beta}) + \lambda_{\mathrm{fr}} E_{\mathrm{fr}}(\vec{\beta}, \vec{\theta}) \qquad (4)$$

The weighting factors λ of the loss function E(β, θ) have a significant impact on the quality of the 3D pose reconstruction. Higher values of λ_θ restrict the optimisation to adopting learned postures. Lower values lead to a higher influence of the joint re-projection term, which can, however, lead to unrealistic body postures in many cases. To overcome this issue, the λ_θ value was decreased for the last iteration steps, which fine-tunes the body posture at the end. A similar approach was used for the weighting factor λ_sp, which was increased at the end of the optimisation to prevent self-intersections in the reconstruction. The other weighting factors were kept constant for the entire optimisation.
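The staged weights can be expressed as a simple schedule over the iteration counter; the values below are those listed for Configuration 1 in Table II, and the function itself is only an illustration, not the authors' optimisation loop.

```python
def weights_for_iteration(it):
    """Weighting factors for iteration step `it` (Configuration 1 in Table II, 1000 iterations)."""
    return {
        "theta": 100.0 if it < 900 else 9.0,    # pose prior lowered at the end (fine-tuning)
        "sp":      0.0 if it < 900 else 100.0,  # self-intersection penalty raised at the end
        "beta":  400.0,                         # shape prior, constant
        "a":     100.0,                         # unnatural joint-bending penalty, constant
        "fr":    100.0,                         # temporal smoothness, constant
    }
```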

From the optimised shape and pose coefficients β and θ, the 3D joint positions of the SMPL skeleton, as shown in Fig. 5, can be reconstructed. These 3D joint positions are further denoted as J_pose. Additionally, joint angles of the hips (J1, J2), ankles (J7, J8), shoulders (J16, J17) and wrists (J20, J21) were calculated according to the recommendations of the International Society of Biomechanics (ISB) [37–38]. Since the SMPL joints do not generally represent the anatomical human skeleton, the joint coordinate systems were defined such that they best approximate the recommendations. A description of how the coordinate systems' axes are defined can be obtained from TABLE I.

TABLE I

DEFINITION OF THE COORDINATE SYSTEMS' X-, Y- AND Z-AXES IN THE HIP, ANKLE, SHOULDER AND WRIST JOINTS ACCORDING TO THE ISB RECOMMENDATIONS [37–38]. THE Y-AXES ARE DEFINED THROUGH JOINT CONNECTIONS, THE Z- AND X-AXES THROUGH CROSS PRODUCTS; THE REMAINING AXIS COMPLETES THE RIGHT-HANDED SYSTEM. JaJb DENOTES THE VECTOR FROM JOINT Ja TO JOINT Jb.

Joint    | Left                              | Right
Hip      | x = J4J1 × J1J2,  y = J4J1        | x = J5J2 × J2J1,  y = J5J2
Ankle    | y = J7J4,  z = J4J7 × J7J10       | y = J8J5,  z = J11J8 × J8J5
Shoulder | x = J18J16 × J16J13,  y = J18J16  | x = J19J17 × J17J14,  y = J19J17
Wrist    | y = J20J18,  z = J22J20 × J20J18  | y = J21J19,  z = J23J21 × J21J19
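The following sketch shows how such a local frame can be assembled from the SMPL joint positions, here for the left hip row of TABLE I. It is an illustration of the construction, not the authors' code, and assumes that the third axis completes the right-handed system.

```python
import numpy as np

def left_hip_frame(J):
    """J: array-like of 3D joint positions indexed by SMPL joint ID (see Fig. 5)."""
    y = J[1] - J[4]                    # vector J4J1 (left knee -> left hip), thigh direction
    x = np.cross(y, J[2] - J[1])       # J4J1 x J1J2 (cross with the hip-to-hip direction)
    z = np.cross(x, y)                 # completes the right-handed frame (assumption)
    axes = np.stack([x, y, z], axis=1)
    return axes / np.linalg.norm(axes, axis=0)   # columns are the unit x-, y- and z-axes

# The angles (alpha, beta, gamma) reported in the appendix tables can then be obtained
# from the direction cosines between these local axes and the global coordinate axes.
```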

3) Determination of Weighting Factors λ

The optimisation weights λ cannot be determined from the considered accident videos, since ground truth information is missing. To overcome this issue, the h36m dataset [39] was used. The dataset contains videos with measured 3D ground truth information of actors performing common activities, such as walking, talking on the phone, taking photos and similar. Hence, reconstructed postures can be compared with the ground truth joints of the h36m dataset and the impact of the weighting factors λ can be evaluated. In this study, samples of a walking actress were used, since walking is a commonly observed pedestrian pre-collision action, as described in [13–14]. Frames 51 and 101 are shown in Fig. 7 together with the measured 3D joints.

In comparison to the SMPL model, the h36m skeleton is represented differently and only a subset of joints can be compared directly, which excludes key points of the head and feet. For the comparison, the mean per-joint position error (MPJPE) is commonly used [29]. Despite its capability to estimate the actual deviation from the ground truth data, the MPJPE can be misleading in some cases, resulting in low errors for incomparable poses. To overcome this issue and to further evaluate the method's capability of reconstructing pedestrian kinematics, knee and elbow joint angles have been compared for consecutive frames. For this purpose, a sequence of two seconds (frames 51 to 151) of the walking actress was used. The data was sub-sampled to a sampling rate of 12.5 Hz, resulting in 26 frames, and the angles were calculated between the respective joint connections. In order to exclude the influence of ambiguous key point annotations, the camera-projected 2D key points were used as input for the 3D reconstruction.
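The two evaluation measures can be written compactly as below; this is a minimal sketch for clarity, not the authors' evaluation script.

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean per-joint position error: pred and gt are (n_joints, 3) arrays in mm."""
    return np.linalg.norm(pred - gt, axis=1).mean()

def joint_angle(p_prox, p_joint, p_dist):
    """Angle at p_joint (e.g. a knee) between the connections to p_prox and p_dist, in degrees."""
    u = p_prox - p_joint
    v = p_dist - p_joint
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
```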

Fig. 5. The SMPL model from the front and side view. Each joint is identified by a unique ID. The coordinate systems for the ankle, hip, shoulder and wrist were defined to mimic the ISB recommendations for reporting kinematic data [37–38]. The angles of the local coordinate systems with respect to the global coordinate system have been used to describe the body postures quantitatively.

Fig. 6. Corresponding points within the SMPL model and the annotated 2D key points. The key points highlighted in red correspond to joints of the SMPL skeleton; points in green are located on the mesh. Black key points are not considered for the optimisation.

Fig. 7. Frames 51 and 101 of the h36m dataset together with the measured 3D joints.

III. RESULTS

In total, 470 videos were visually analysed and further classified with respect to video quality and viewing point. In this study, pedestrian pre-collision body postures were reconstructed from three high-quality videos shot from different viewing points. The optimisation weights for the reconstruction were determined by applying the method to samples of the h36m dataset.

1) Accident Video Dataset Analysis

For this investigation, 470 videos were visually inspected and classified. Eleven videos were rated 1; 27 were rated 2; 49 were rated 3; 197 were rated 4; and 186 were rated 5. Videos of Categories 4 and 5 exhibit full occlusion and/or bad light conditions, such that even a visual interpretation by humans is difficult. Videos of these categories were not considered applicable for the 3D reconstruction and were excluded from this investigation. The remaining 90 videos were subcategorised according to the different viewing points. The results showed that 49 collisions were recorded from moving vehicles; in 19 of those videos the collision happened with the ego vehicle, while in the remaining 30 samples other cars were involved in the accident. In comparison, 41 collisions were recorded from a stationary viewing point, 22 of them from a car-centred view and 19 with common surveillance cameras. For this category, the collision happened with the surrounding traffic.

The distribution of the different viewing points was considered for the selection of the videos in this study. Therefore, two videos from a moving vehicle (one colliding with the ego vehicle and one from an observing perspective) and one from a static viewing point were chosen. Relying on the investigations of typical pedestrian reactions [13–14][17], videos were selected in which the pedestrians showed avoidance reactions, i.e., raising arms or leaning backwards, or fear reactions, such as freezing. Representative frames of the selected videos are shown in Fig. 8, Fig. 9 and Fig. 10.

Fig. 8. Video 1: Car-centred view and the collision with the ego vehicle. The pedestrian shows extensive arm support.

Fig. 9. Video 2: Car-centred view and collision with a vehicle of the surrounding traffic. The pedestrian shows a leaned back pose and additional arm support.

Fig. 10. Video 3: Static view. The pedestrian freezes and shows additional arm support.

2) Determined Weighting Factors λ

In order to estimate the method's capabilities and to determine well-performing optimisation weights λ, the 3D reconstruction was performed on consecutive frames of the h36m dataset. The tested parameter configurations are summarised in Table II. Fig. 11 outlines the MPJPE over multiple frames. The results show that the error is around 50 mm at the minimum and 130 mm at the maximum. Parameter Configuration 1 had the lowest mean MPJPE. Fig. 12 shows a comparison of the ground truth joint angles with the results obtained using the different optimisation configurations. The results show that λ_fr smooths the reconstructed postures over multiple frames. Further, it can be stated that a constantly high value of the pose prior weight λ_θ leads to lower knee bending in comparison to the ground truth data. Although low values of λ_θ lead to better results for the knee angles, a disadvantage is that they are less robust against uncertain key point annotations and can thus lead to unrealistic body postures. The results therefore support the fine-tuning approach of lowering the pose prior weight from an initial value of 100 to 9 at the end of the optimisation.

TABLE II
DIFFERENT PARAMETER CONFIGURATIONS, USED TO RECONSTRUCT THE SELECTED H36M FRAMES. THE VALUES IN PARENTHESES INDICATE THE ITERATION STEPS FOR WHICH THE VALUES HAVE BEEN USED.

                 | λ_θ                        | λ_sp                       | λ_β | λ_a | λ_fr
Configuration 1  | 100 (0-900), 9 (900-1000)  | 0 (0-900), 100 (900-1000)  | 400 | 100 | 100
Configuration 2  | 100 (0-900), 9 (900-1000)  | 0 (0-900), 100 (900-1000)  | 400 | 100 | 0
Configuration 3  | 100                        | 0 (0-900), 100 (900-1000)  | 400 | 100 | 100
Configuration 4  | 9                          | 0 (0-900), 100 (900-1000)  | 400 | 100 | 100


Fig. 11. Comparison of the MPJPE between the reconstructed poses and the ground truth h36m poses for different parameter configurations.

Fig. 12. Comparison of the reconstructed joint angles (left/right elbow/knee) with the selected h36m video sequence. The ground truth data is shown in black, the reconstructions in red.

3) Quantitative Body Postures of Selected Videos

For the reconstruction, parameter Configuration 1 was used since it performed well on the h36m ground truth data. In contrast to Videos 1 and 2, the female SMPL model was used for the reconstruction of Video 3. Fig. 13 shows the reconstructed body postures for the selected videos prior to impact. The reconstructed postures of the last five frames prior to impact can be obtained from Table A-II, Table A-V and Table A-VIII. The corresponding joint coordinates J_pose of these frames are given in Table A-IV, Table A-VII and Table A-X, and detailed joint angles in Table A-III, Table A-VI and Table A-IX.


Fig. 13. Reconstructed 3D postures (right) for each of the selected videos (left), displaying common pedestrian reactions such as raising arms and/or leaning back.

IV. DISCUSSION

A. Accident Video Dataset

Similar to the approaches shown in [13][17], most of the videos used in the accident dataset were extracted from video compilations published on video sharing platforms. Although some videos reveal time and date information, the meta-information of the accidents largely remains unknown. The studies [13][17] have, however, concluded that their collected video datasets also represent the data held in in-depth databases, which could be a further incentive to use videos to draw general conclusions about typical pedestrian pre-crash body postures as well as to facilitate quantitative comparisons.

The investigation revealed that many currently available videos are not applicable for the reconstruction. This is mainly due to insufficient light conditions, temporal or spatial resolution or because pedestrians are fully or partly occluded. The method should however be applicable if key points can be annotated in a meaningful way.

From today's perspective, camera performance has improved in terms of cost and quality and computer vision algorithms are delivering better results for semantic scene reconstruction, so video recordings are increasingly being used in applications [20]. Applications that can capture pedestrian pre-crash behaviour range from dashboard cameras, whose recordings can be used as evidence in court [40] or to monitor road conditions [41], to cameras used for environmental perception within active safety systems [22]. On the one hand, this trend may lead to a greater amount of (publicly) available, high-quality accident videos in the near future. On the other hand, it would be reasonable to expect that videos revealing dedicated meta-information (such as geolocation) of actual accidents would to a greater extent be used for accident reconstruction.

B. 2D Key Point Detection

The 2D key point detection is a prerequisite for the 3D reconstruction. Obviously, recordings of (partly) occluded accidents and recordings in deficient lighting conditions will never fully reveal the pedestrian pose. Evidently, a high spatial resolution of the pedestrian is beneficial for the 2D key point annotation, irrespective of whether it is produced automatically or manually. Further, it was observed that the key point annotation is rather difficult to produce if the pedestrian is wearing wide, loosely fitting clothes.

The state-of-the-art key point detector openPose [24] was mainly selected due to its well evaluated performance on diverse datasets. Applying openPose to multiple accident videos showed that occluded body parts (often the case for frames which show the pedestrian from the side) were not detected in many cases. openPose works frame-wise, hence prior information on the key point positions is not considered for the estimation in the subsequent frame. However, key point detection in videos might particularly benefit from using additional information such as optical flow [42], or further prior knowledge of the pedestrian. Enhancing the automation of the key point detection was, however, out of the scope of this investigation, such that the estimates were manually improved for the selected videos.

C. 3D Pose Reconstruction

The comparison with known ground truth data in this study revealed the method's sensitivity to the weighting factors λ. Finding an appropriate parameter set for the reconstruction is also a matter of how trustworthy the 2D key point annotations are. This is especially important for cases where the 2D key point annotation is ambiguous due to low spatial resolution and/or loosely fitting clothes. The comparison with ground truth data has shown that the method is able to extract movement patterns, such as human gait and the accompanying arm movement, and suggests that the reconstruction works better when the person is filmed from a side view. The reconstructed angles match quite well in the temporal range between 0.75 and 1.25 seconds. The reconstruction of 3D information from 2D images, however, remains ambiguous [28], and this ambiguity cannot be completely eliminated by the introduced approach. For a further error estimation, influence factors such as the camera position relative to the pedestrian and the accuracy of the 2D key point annotation would also have to be investigated. For this purpose, further samples of the h36m data could be used, for which the reconstructed results can be compared with the measured ground truth data. A comparison with ground truth data showing typical pre-crash reactions would be highly valuable to evaluate the method's accuracy.

In comparison to an HBM, the SMPL model [30] does not represent an anatomically correct human body and not all joints have a direct counterpart in the human skeleton [37–38]. Although this study assumes that the joints of the extremities are comparable to their counterparts in the human skeleton, this assumption would need further investigation.

D. Quantitative Descriptions of Pedestrian Body Postures

Pedestrian pre-collision behaviour has a significant impact on the outcome of accidents [7–8]. Previous studies introduced reactions by analysing real-world data in a qualitative manner [13]. Qualitative descriptions give a notion of the course of events and can be used for statistical classification. The extraction of quantitative descriptions has several further advantages, which might be exploitable in the future.

First, the extracted body postures can be used to position HBMs for in-crash studies more realistically, by using the joint angles extracted in this study. The effect of the different pre-crash postures should be investigated based on the reconstructed postures, as done in [9] on a global level. Based on these results, it may be beneficial to check the robustness of pedestrian safety systems for a variety of realistic initial postures. The findings should also be considered for the reconstruction of pedestrian accidents, i.e., to enlarge the catalogue of considered initial postures for fast-running pedestrian simulation models. Further, it might be interesting to determine desirable initial body postures of pedestrians leading to lower injury risks.

The extraction of consecutive frames has the benefit of making the entire course of events reconstructible. It might be advantageous to evaluate different temporal stages within the accident, starting from normal gait behaviour, over the perception of the approaching danger, to the induced avoidance reaction [13–14][17]. Recent studies have shown that HBMs can be transferred from one posture to another using feedback control strategies and active muscle models [43]. Temporal pose information might be applicable in this domain, either to validate induced model kinematics or to serve as target positions for determining control parameters.

By investigating further accident videos, it could also become possible to analyse gender- and age-specific differences [14] based on body size and pose patterns, e.g., a bent posture. Such behavioural reaction patterns might be exploitable within integrated safety systems to predict impact times and locations more accurately and to define appropriate countermeasures earlier. Realistic pedestrian pre-crash reactions might also be helpful for developing realistic scenarios, which can be used for the assessment of active safety systems. Currently, pedestrian behaviour is highly simplified in most studies, as it is often based on accident reconstructions in which constant speeds and walking postures have been assumed [4–5]. The incorporation of accurate pre-collision behaviour would therefore enhance the assessment and reveal more accurate results.


V. CONCLUSIONS

This study has shown that a vision-based toolchain can be used to derive quantitative 3D descriptions of pedestrians prior to an accident, based on real-world accident videos. In order to evaluate the capabilities of the introduced approach, the method was applied to selected videos recorded from static and car-centred viewpoints. By applying the developed method, it was possible to extract important features, such as joint angles, from accident videos semi-automatically. The findings suggest that a better spatial resolution of the affected pedestrian and static surveillance recordings without occlusion could further improve the quantitative evaluation of pedestrian movement features in an automated way for future analyses.

VI. ACKNOWLEDGEMENT

This study is part of the i-protect project ADAM, funded by Mercedes Benz AG, Germany. The authors are grateful for the enlightening discussions within the Tech Center i-protect, especially with our colleagues at the Institute for Modelling and Simulation of Biomechanical Systems at the University of Stuttgart in Germany. Furthermore, we would like to thank the Institute of Computer Graphics and Vision (TU Graz) in Austria for their support and fruitful discussions, as well as Yong Han from Xiamen University of Technology in China and the VSLab (NTHU) in Taiwan, who provided us with accident videos that further supported this study. The authors are further grateful to Kai De Block (TU Graz), who helped us with data preparation and processing.

VII. REFERENCES

[1] European Commission (2018) Annual Accident Report 2018, 2018, Belgium.
[2] Euro NCAP (2019) Test Protocol - AEB VRU systems, 2019.
[3] Euro NCAP (2019) Assessment Protocol - Vulnerable Road User protection, 2019.
[4] Gruber M, Kolk H, Klug C, Tomasch E, Feist F, Schneider A et al. (2019) The effect of P-AEB system parameters on the effectiveness for real world pedestrian accidents. Proceedings of International Technical Conference on the Enhanced Safety of Vehicles, 2019, Eindhoven, Netherlands.

[5] Detwiller M, Gabler HC. (2017) Potential Reduction in Pedestrian Collisions with an Autonomous Vehicle. Proceedings of International Technical Conference on the Enhanced Safety of Vehicles, 2017, Michigan, USA.

[6] Klug C, Feist F, Schneider B, Sinz W, Ellway J, van Ratingen M. (2019) Development of a Certification Procedure for Numerical Pedestrian Models. Proceedings of International Technical Conference on the Enhanced Safety of Vehicles, 2019, Eindhoven, Netherlands.

[7] Klug C, Feist F, Raffler M, Sinz W, Petit P, Ellway J et al. (2017) Development of a Procedure to Compare Kinematics of Human Body Models for Pedestrian Simulations. Proceedings of IRCOBI, 2017, Antwerp, Belgium.

[8] Li G, Yang J, Simms C. (2015) The influence of gait stance on pedestrian lower limb injury risk. Accident Analysis & Prevention, 2015, 85: pp.83–92.

[9] Soni A, Robert T, Beillas P. (2013) Effects of pedestrian pre-crash reactions on crash outcomes during multi-body simulations. Proceedings of IRCOBI Conference, 2013, Gothenburg, Sweden.

[10] Tamura A, Nakahira Y, Iwamoto M, Nagayama K, Matsumoto T. (2008) Effects of pre-impact body orientation on traumatic brain injury in a vehicle-pedestrian collision. International Journal of Vehicle Safety, 2008, 3(4): pp.351–370.

[11] Wood DP, Simms C, Walsh DG. (2005) Vehicle-pedestrian collisions. Proceedings of the Institution of Mechanical Engineers, Part D: Journal of Automobile Engineering, 2005, 219(2): pp.183–195.

[12] Untaroiu CD, Meissner MU, Crandall JR, Takahashi Y, Okamoto M, Ito O. (2009) Crash reconstruction of pedestrian accidents using optimization techniques. International Journal of Impact Engineering, 2009, 36(2): pp.210–219.

[13] Han Y, Li Q, He W, Wan F, Wang B, Mizuno K. (2017) Analysis of Vulnerable Road User Kinematics Before/During/After Vehicle Collisions Based on Video Records. Proceedings of IRCOBI Conference, 2017, Antwerp, Belgium.

[14] Soni A, Robert T, Rongieras F, Beillas P. (2013) Observations on Pedestrian Pre-Crash Reactions during Simulated Accidents. Stapp Car Crash Journal, 2013, 57.

[15] Barry F, Simms C. (2016) Assessment of Head-Ground Impact Patterns in Real World Pedestrian-Vehicle Collisions. Proceedings of IRCOBI Conference, 2016, Malaga, Spain.


[16] Li Q, Han Y, Mizuno K. (2018) Ground Landing Mechanisms in Vehicle-To-Pedestrian Impacts Based on Accident Video Records. Proceedings of WCX World Congress Experience, 2018.

[17] Han Y, Li Q, Wang F, Wang B, Mizuno K, Zhou Q. (2018) Analysis of pedestrian kinematics and ground impact in traffic accidents using video records. International Journal of Crashworthiness, 2018, 1(2): pp.1–10.

[18] Szegedy C, Liu W, Jia Y, Sermanet P, Reed S, Anguelov D et al. (2015) Going deeper with convolutions. Proceedings of 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015, Boston, MA, USA.

[19] Lecun Y, Bottou L, Bengio Y, Haffner P. (1998) Gradient-based learning applied to document recognition. Proceedings of the IEEE, 1998, 86(11): pp.2278–2324.

[20] Krizhevsky A, Sutskever I, Hinton GE. (2017) ImageNet classification with deep convolutional neural networks. Communications of the ACM, 2017, 60(6): pp.84–90.

[21] Ma C, Huang J-B, Yang X, Yang M-H. (2015) Hierarchical Convolutional Features for Visual Tracking. Proceedings of 2015 IEEE International Conference on Computer Vision (ICCV), 2015, Santiago, Chile.

[22] Keller CG, Gavrila DM. (2014) Will the Pedestrian Cross? A Study on Pedestrian Path Prediction. IEEE Transactions on Intelligent Transportation Systems, 2014, 15(2): pp.494–506.

[23] Andriluka M, Pishchulin L, Gehler P, Schiele B. (2014) 2D Human Pose Estimation: New Benchmark and State of the Art Analysis. Proceedings of 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014, Columbus, OH, USA.

[24] Cao Z, Hidalgo Martinez G, Simon T, Wei S-E, Sheikh YA. (2019) OpenPose: Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019.

[25] Johnson S, Everingham M. (2011) Learning effective human pose estimation from inaccurate annotation. Proceedings of 2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2011, Colorado Springs, CO, USA.

[26] Yang Y, Ramanan D. (2013) Articulated Human Detection with Flexible Mixtures of Parts. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013, 35(12): pp.2878–2890.

[27] Chan F-H, Chen Y-T, Xiang Y, Sun M. (2017) Anticipating Accidents in Dashcam Videos. In: Lai S-H, Lepetit V, Nishino K, Sato Y (eds). Computer vision - ACCV 2016: 13th Asian Conference on Computer Vision, Taipei, Taiwan, November 20-24, 2016, revised selected papers, Cham: Springer, 2017. pp. 136–153.

[28] Akhter I, Black MJ. (2015) Pose-conditioned joint angle limits for 3D human pose reconstruction. Proceedings of 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015, Boston, MA, USA.

[29] Pavllo D, Feichtenhofer C, Grangier D, Auli M. (2019) 3D Human Pose Estimation in Video With Temporal Convolutions and Semi-Supervised Training. Proceedings of 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019, Long Beach, CA, USA.

[30] Loper M, Mahmood N, Romero J, Pons-Moll G, Black MJ. (2015) SMPL: a skinned multi-person linear model. ACM Transactions on Graphics, 2015, 34(6): pp.1–16.

[31] Bogo F, Kanazawa A, Lassner C, Gehler PV, Romero J, Black MJ. (2016) Keep it SMPL: Automatic Estimation of 3D Human Pose and Shape from a Single Image. CoRR, 2016, abs/1607.08128.

[32] Shahroudy A, Liu J, Ng T-T, Wang G. (2016) NTU RGB+D: A Large Scale Dataset for 3D Human Activity Analysis. Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, Las Vegas, NV, USA.

[33] Lin T-Y, Maire M, Belongie SJ, Bourdev LD, Girshick RB, Hays J et al. (2014) Microsoft COCO: Common Objects in Context. CoRR, 2014, abs/1405.0312.

[34] CMU graphics lab motion capture database, http://mocap.cs.cmu.edu. [Accessed 2020.06.05]
[35] Pavlakos G, Choutas V, Ghorbani N, Bolkart T, Osman AA, Tzionas D et al. (2019) Expressive Body Capture: 3D Hands, Face, and Body From a Single Image. Proceedings of 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019, Long Beach, CA, USA.

[36] Wu Z, Jiang W, Luo H, Cheng L. (2019) A Novel Self-Intersection Penalty Term for Statistical Body Shape Models and Its Applications in 3D Pose Estimation. Applied Sciences, 2019, 9(3): p.400.

[37] Wu G, Siegler S, Allard P, Kirtley C, Leardini A, Rosenbaum D et al. (2002) ISB recommendation on definitions of joint coordinate system of various joints for the reporting of human joint motion—part I: ankle, hip, and spine. Journal of biomechanics, 2002, 35(4): pp.543–548.


[38] Wu G, van der Helm, Frans C.T., Veeger HEJ, Makhsous M, van Roy P, Anglin C et al. (2005) ISB recommendation on definitions of joint coordinate systems of various joints for the reporting of human joint motion—Part II. Journal of biomechanics, 2005, 38(5): pp.981–992.

[39] Ionescu C, Papava D, Olaru V, Sminchisescu C. (2014) Human3.6M: Large Scale Datasets and Predictive Methods for 3D Human Sensing in Natural Environments. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2014, 36(7): pp.1325–1339.

[40] Schweiger M, Werderitsch L. (2018) Verwertung von Dashcam-Aufnahmen im Zivilprozess [Use of dashcam recordings in civil proceedings]. Zivilrecht aktuell (Zak), 2018, 10/2018: pp.187–190.

[41] Jokela M, Kutila M, Le L. (2009) Road condition monitoring system based on a stereo camera. Proceedings of 2009 IEEE 5th International Conference on Intelligent Computer Communication and Processing (ICCP), 2009, Cluj-Napoca, Romania.

[42] Tsai Y-H, Yang M-H, Black MJ. (2016) Video Segmentation via Object Flow. Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, Las Vegas, NV, USA.

[43] Mo F, Li J, Dan M, Liu T, Behr M. (2019) Implementation of controlling strategy in a biomechanical lower limb model with active muscles for coupling multibody dynamics and finite element analysis. Journal of biomechanics, 2019, 91: pp.51–60.

VIII. APPENDIX

TABLE A-I EXEMPLARY VIDEO FRAMES FOR DIFFERENT QUALITY LEVELS (OC: OCCLUSION, LC: LIGHT CONDITIONS, SR: SPATIAL RESOLUTION).

OC=Fully | OC=Partly, LC=Bad | OC=Partly, LC=Good, SR=Low | OC=Partly, LC=Good, SR=High | OC=None, LC=Bad | OC=None, LC=Good, SR=Low | OC=None, LC=Good, SR=High


TABLE A-II REPRESENTATIVE FRAMES EXTRACTED FROM VIDEO 1.

Frame 1 Frame 2 Frame 3 Frame 4 Frame 5

TABLE A-III 3D JOINT ANGLES OF THE HIP, ANKLE, SHOULDER AND WRIST FOR FIVE CONSECUTIVE FRAMES EXTRACTED FROM VIDEO 1. EACH ENTRY REPRESENTS THE ANGLES (ALPHA BETA GAMMA) BETWEEN THE LOCAL COORDINATE SYSTEM AND THE GLOBAL COORDINATE SYSTEM IN DEGREE [°].

Joint  Frame 1  Frame 2  Frame 3  Frame 4  Frame 5

J1 X 53.8 125.1 124.5 67.1 133.2 128.1 69.4 127.5 135.4 89.7 134.1 135.9 97.7 112. 156.6 Y 75.5 37.1 123.3 86.7 46.7 136.5 96.1 43.1 132.5 109.7 47.5 131. 104.2 24.5 109.6 Z 39.9 79.5 52.1 23.1 75.7 72.2 21.5 71.8 78.9 19.7 75.8 103.3 16.2 79.6 102.3

J2 X 129.9 62.6 52.1 116.5 56.2 45.5 112.9 60.5 38.9 92.2 70.9 19.2 82.3 68. 23.4 Y 57.9 32.5 94.4 53.2 39.8 103.2 55.4 37.5 102.6 54.4 39.1 104.1 56.6 42.1 112.5 Z 123.6 73.9 141.7 131.7 71.5 132.5 136.5 69.1 126. 144.3 57.3 102.7 145.5 56.2 96.1

J7 X 61.2 101. 148.8 90.9 98.2 171.8 93.7 105.2 164.3 108.4 115.7 147.5 127.1 95.5 142.4 Y 82.6 11.2 98.4 91.4 8.3 98.2 95.4 15.8 104.8 94.2 26.1 115.7 84.3 12.6 101.2 Z 29.9 92.3 60.2 1.7 88.7 91.1 6.5 85.8 95. 18.9 94.3 108.4 37.6 101.3 125.3

J8 X 167.3 79.5 82.9 153.3 79.8 65.5 146.3 74.6 60.9 121.6 69.4 39.1 82.1 84.3 9.8 Y 80. 10.8 94.1 79.1 10.9 90.6 74.8 15.9 94.4 64.9 26.2 96.9 59.6 32. 99. Z 97.8 92.8 171.7 114.1 86. 155.5 119.2 86.4 150.5 137.6 74.5 128.2 148.3 58.6 86.2

J16 X 78.7 54.9 142.6 77.1 68. 154.1 74.3 85.4 163.6 70.9 132. 131.8 79.5 114.6 153. Y 164.3 74.5 92.5 157.4 67.9 94.2 152.4 65.8 102.5 132.6 70.9 131.3 135.7 55.6 114.2 Z 79.2 39.4 52.7 71.9 32.1 64.5 67.9 24.7 79.5 48.7 48.2 110.3 47.6 44.5 101.2

J17 X 122.5 125.9 52.7 113.3 102.3 26.7 104.8 92.8 15. 52.6 65.2 47.5 65.2 75.5 29.3 Y 76.6 53.9 39.2 66.5 33.3 67.9 62.1 29.8 80.4 70. 51.1 134.3 57.5 45.1 117.5 Z 144.2 56.2 100.4 145.8 59.6 104.3 147.8 60.3 101.4 135.8 49.1 76.3 137. 48.5 80.7

J20 X 124.4 145.6 89.5 122.7 145. 79. 117.7 149.4 78.1 119.2 149.6 82.4 148. 120.4 99.2 Y 128.9 65. 130.7 136.9 69.9 126.2 137.6 75.6 128.8 108.9 87.7 160.9 84.1 82. 170.1 Z 57.7 111.9 139.3 65.3 117.3 141.6 60.7 116.3 138.7 35.9 120.3 107.4 58.6 148.4 93.7

J21 X 16.1 83.7 75.3 19.9 91.2 70.2 16.3 103.1 80.4 60. 144.7 106.9 58.2 133.7 119.7 Y 87.7 31.6 121.5 73.3 54.2 139.4 74.2 52.9 138.5 81.5 65.8 154.1 92.2 55.7 145.6 Z 105.9 59.2 35.5 100.5 35.8 56.2 86. 40.1 50.1 31.4 66. 71. 31.9 63. 74.3


TABLE A-IV QUANTITATIVE 3D PEDESTRIAN DESCRIPTIONS FOR MULTIPLE FRAMES EXTRACTED FROM VIDEO 1. THE X-Y-Z COORDINATE FOR EACH JOINT OF THE RECONSTRUCTED POSE GIVEN IN MM.

Joint  Frame 1  Frame 2  Frame 3  Frame 4  Frame 5

J0 -2 -245 28 -2 -242 28 -2 -243 28 -3 -246 28 -3 -250 29 J1 39 -334 69 42 -330 66 44 -332 62 50 -338 24 50 -343 25 J2 -45 -334 -24 -49 -330 -22 -52 -332 -18 -75 -332 18 -76 -336 15 J3 42 -124 2 40 -126 0 37 -125 -3 14 -123 -21 18 -123 -20 J4 -97 -615 307 -50 -554 381 15 -583 367 182 -591 297 156 -664 222 J5 -266 -655 -7 -313 -612 50 -305 -616 71 -326 -618 102 -324 -617 118 J6 -16 -1 51 -12 -3 56 5 8 50 31 23 15 28 23 13 J7 -134 -1047 331 4 -986 390 114 -990 481 197 -913 577 101 -1037 424 J8 -361 -1070 -19 -418 -1024 15 -440 -1021 80 -532 -981 184 -515 -900 361 J9 -41 53 67 -34 53 73 -8 69 66 35 89 31 31 88 29 J10 -208 -1090 445 13 -1029 525 121 -1022 618 227 -962 704 149 -1076 551 J11 -502 -1095 18 -546 -1064 76 -560 -1059 157 -614 -1031 294 -494 -951 496 J12 -111 253 93 -96 256 95 -39 275 80 28 291 21 31 297 23 J13 -18 171 135 -5 172 136 41 185 115 103 202 20 105 200 30 J14 -131 153 26 -121 155 35 -82 179 31 -40 203 31 -42 207 21 J15 -139 308 170 -123 319 167 -62 340 152 48 362 85 47 368 89 J16 75 228 209 92 230 201 145 246 158 210 261 22 226 239 41 J17 -226 175 -48 -220 177 -27 -178 221 -15 -126 255 69 -143 257 41 J18 321 171 233 330 151 232 372 162 220 398 185 177 425 99 119 J19 -298 36 -252 -340 -37 -115 -312 3 3 -225 79 236 -276 64 162 J20 526 42 345 555 43 329 596 72 341 480 150 432 376 58 378 J21 -293 -171 -88 -415 -173 104 -397 -158 203 -279 -31 478 -280 -90 384 J22 569 -28 380 608 -18 367 655 20 382 486 106 509 339 38 458 J23 -259 -235 -33 -412 -224 179 -405 -227 262 -282 -102 536 -269 -160 442

TABLE A-V REPRESENTATIVE FRAMES EXTRACTED FROM VIDEO 2.

Frame 1 Frame 2 Frame 3 Frame 4 Frame 5

TABLE A-VI
3D JOINT ANGLES OF THE HIP, ANKLE, SHOULDER AND WRIST FOR FIVE CONSECUTIVE FRAMES EXTRACTED FROM VIDEO 2. EACH ENTRY REPRESENTS THE ANGLES (ALPHA BETA GAMMA) BETWEEN THE LOCAL COORDINATE SYSTEM AND THE GLOBAL COORDINATE SYSTEM IN DEGREES [°].

Joint  Axis   Frame 1 | Frame 2 | Frame 3 | Frame 4 | Frame 5

J1   X   141.5 110.7 120.8 | 142.2 111.3 119.6 | 143.1 113.8 116.4 | 145.4 112.3 115.0 | 146.0 112.4 114.1
     Y   119.0 30.5 81.5 | 120.3 32.6 79.1 | 122.2 35.1 77.6 | 120.8 35.3 74.6 | 121.7 38.6 70.3
     Z   67.1 68.7 147.8 | 69.7 66.5 148.1 | 73.9 65.8 150.4 | 75.8 64.2 150.0 | 79.0 60.4 148.0

J2   X   37.2 71.4 59.1 | 34.1 74.5 60.5 | 31.2 74.6 63.6 | 28.0 77.6 65.3 | 25.9 79.8 66.5
     Y   101.7 19.7 105.6 | 100.2 16.5 102.8 | 101.1 16.4 101.9 | 98.8 13.5 100.2 | 97.3 11.3 98.6
     Z   124.7 83.6 35.4 | 122.1 84.5 32.7 | 118.7 84.8 29.3 | 116.4 84.7 27.0 | 114.7 85.2 25.2

J7   X   177.3 89.1 92.6 | 177.4 90.9 92.4 | 177.6 87.9 89.0 | 171.3 82.3 85.9 | 161.1 74.1 80.2
     Y   89.6 10.2 79.8 | 91.4 12.6 77.5 | 87.7 15.5 74.7 | 81.4 20.5 71.6 | 71.3 30.0 67.5
     Z   87.3 79.8 169.4 | 87.8 77.4 167.2 | 90.4 74.6 164.6 | 91.4 71.1 161.1 | 92.4 65.3 155.2

J8   X   57.7 102.9 35.4 | 57.9 101.2 34.4 | 56.3 102.9 36.7 | 57.6 107.0 37.6 | 56.6 108.8 39.7
     Y   83.0 12.9 79.2 | 87.1 11.7 78.6 | 85.7 13.4 77.3 | 82.2 17.1 74.9 | 80.4 18.9 73.9
     Z   146.7 90.0 56.7 | 147.8 93.6 58.0 | 146.0 93.6 56.2 | 146.4 91.9 56.5 | 144.9 91.3 54.9

J16  X   141.5 104.4 124.7 | 151.2 99.1 117.1 | 145.1 108.9 118.1 | 148.0 109.4 114.8 | 145.6 112.4 114.7
     Y   118.1 33.3 73.6 | 112.0 35.2 64.1 | 121.7 41.2 66.6 | 118.5 34.7 71.9 | 119.4 32.0 78.6
     Z   66.1 60.7 140.5 | 72.4 56.4 140.9 | 77.0 55.0 141.9 | 76.0 62.5 148.5 | 73.8 68.4 152.5

J17  X   21.7 81.9 70.0 | 23.2 68.0 83.1 | 40.0 51.8 100.1 | 42.5 51.5 105.1 | 35.1 60.4 107.1
     Y   86.3 32.0 121.7 | 102.0 43.7 131.2 | 123.0 61.3 133.4 | 129.3 59.8 126.0 | 124.5 52.6 123.9
     Z   111.4 59.3 38.8 | 109.5 54.5 42.1 | 109.9 51.6 45.1 | 103.6 53.2 40.1 | 95.6 51.5 39.1

J20  X   132.9 122.3 60.0 | 142.0 111.1 60.0 | 146.2 118.5 73.3 | 144.1 114.8 65.9 | 129.2 116.7 50.8
     Y   137.1 59.2 116.6 | 127.8 67.8 133.9 | 119.5 67.0 141.1 | 123.2 72.7 141.4 | 140.7 71.0 122.9
     Z   91.0 132.2 137.8 | 86.5 148.5 121.3 | 75.0 142.0 124.0 | 78.0 149.0 118.1 | 87.8 146.3 123.6

J21  X   100.2 110.4 157.0 | 104.1 120.2 146.0 | 107.6 142.2 122.2 | 105.3 143.5 122.0 | 114.3 140.1 119.4
     Y   140.5 51.0 95.2 | 164.7 77.7 81.1 | 159.4 81.9 71.2 | 161.5 83.4 72.8 | 155.7 70.2 76.4
     Z   52.3 46.1 112.3 | 84.4 33.1 122.5 | 100.3 53.4 141.5 | 100.1 54.3 142.4 | 90.9 57.0 147.0
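Note: each three-value entry in Table A-VI can be read as the angles between one local axis and the global X, Y and Z axes. Under that reading (an assumption consistent with the caption, not a statement about the authors' implementation), the cosines of the nine angles of one joint in one frame form a direction-cosine matrix from the local to the global frame. The short Python sketch below illustrates this for joint J1 in Frame 1 and checks that the resulting matrix is numerically orthonormal.

    import numpy as np

    # Direction angles in degrees for joint J1, Frame 1 of Table A-VI (Video 2).
    # Row i holds the angles between local axis i (x, y, z) and the global X, Y, Z axes.
    angles_deg = np.array([
        [141.5, 110.7, 120.8],   # local x vs. global X, Y, Z
        [119.0,  30.5,  81.5],   # local y vs. global X, Y, Z
        [ 67.1,  68.7, 147.8],   # local z vs. global X, Y, Z
    ])

    R = np.cos(np.radians(angles_deg))   # direction-cosine (rotation) matrix
    print(np.round(R @ R.T, 2))          # close to the identity matrix for a valid rotation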

TABLE A-VII
QUANTITATIVE 3D PEDESTRIAN DESCRIPTIONS FOR MULTIPLE FRAMES EXTRACTED FROM VIDEO 2. THE X-Y-Z COORDINATES FOR EACH JOINT OF THE RECONSTRUCTED POSE ARE GIVEN IN MM.

Joint   Frame 1 | Frame 2 | Frame 3 | Frame 4 | Frame 5

J0    -1 -243 29 | -1 -244 29 | -1 -244 29 | -1 -244 29 | -1 -246 29
J1    25 -321 -36 | 27 -322 -35 | 28 -321 -35 | 29 -321 -34 | 31 -323 -34
J2    -24 -341 74 | -27 -342 73 | -28 -341 73 | -29 -341 72 | -31 -343 70
J3    -54 -123 14 | -55 -122 13 | -55 -123 13 | -55 -123 13 | -56 -123 12
J4    225 -640 -122 | 246 -627 -126 | 265 -614 -123 | 261 -616 -131 | 285 -594 -145
J5    69 -685 221 | 37 -703 191 | 32 -707 180 | 9 -713 171 | -17 -722 144
J6    -36 14 40 | -37 16 32 | -35 17 29 | -31 16 28 | -26 17 26
J7    206 -1057 -216 | 222 -1034 -258 | 195 -1009 -275 | 136 -987 -306 | 79 -914 -345
J8    -30 -1092 189 | -8 -1118 144 | -8 -1119 114 | -40 -1119 80 | -51 -1131 60
J9    -33 71 49 | -32 75 41 | -27 77 37 | -19 76 35 | -7 76 31
J10   333 -1110 -231 | 347 -1092 -262 | 317 -1071 -291 | 251 -1062 -313 | 156 -1027 -348
J11   47 -1165 292 | 74 -1186 246 | 75 -1183 218 | 29 -1199 184 | 10 -1217 163
J12   -56 290 70 | -48 289 72 | -39 288 64 | -18 288 57 | 5 287 44
J13   -7 193 -12 | -11 197 -16 | -9 196 -23 | 6 192 -27 | 29 186 -34
J14   -85 182 125 | -73 183 125 | -61 187 119 | -45 191 115 | -29 196 105
J15   19 351 81 | 35 340 76 | 46 337 64 | 69 334 50 | 90 334 36
J16   63 210 -113 | 46 228 -122 | 49 230 -128 | 66 215 -135 | 103 194 -134
J17   -122 198 232 | -93 207 233 | -61 224 225 | -38 229 218 | -17 238 205
J18   199 -2 -163 | 157 29 -229 | 197 54 -236 | 185 18 -237 | 199 -25 -218
J19   -133 -27 362 | -30 14 398 | 97 99 402 | 144 96 364 | 152 77 340
J20   388 -143 -43 | 288 -89 -39 | 288 -78 -29 | 294 -82 -23 | 396 -117 -69
J21   69 -200 350 | 220 -45 336 | 344 45 317 | 402 53 305 | 396 -26 293
J22   437 -196 7 | 309 -134 35 | 293 -131 42 | 311 -119 57 | 453 -148 -7
J23   132 -262 332 | 296 -76 297 | 417 8 278 | 483 21 277 | 465 -82 274

TABLE A-VIII
REPRESENTATIVE FRAMES EXTRACTED FROM VIDEO 3.

Frame 1 Frame 2 Frame 3 Frame 4 Frame 5

TABLE A-IX
3D JOINT ANGLES OF THE HIP, ANKLE, SHOULDER AND WRIST FOR FIVE CONSECUTIVE FRAMES EXTRACTED FROM VIDEO 3. EACH ENTRY REPRESENTS THE ANGLES (ALPHA BETA GAMMA) BETWEEN THE LOCAL COORDINATE SYSTEM AND THE GLOBAL COORDINATE SYSTEM IN DEGREES [°].

Joint  Axis   Frame 1 | Frame 2 | Frame 3 | Frame 4 | Frame 5

J1   X   25.9 99.8 113.7 | 27.2 88.3 117.1 | 26.0 87.2 115.8 | 41.5 116.2 60.4 | 70.9 156.4 76.7
     Y   85.8 14.8 104.2 | 98.0 14.5 102.1 | 99.9 17.7 104.5 | 60.7 29.4 87.9 | 29.1 67.3 72.8
     Z   64.5 79.0 28.1 | 64.2 75.6 30.1 | 66.2 72.6 30.1 | 116.5 77.6 29.7 | 111.1 84.0 22.0

J2   X   145.3 64.1 68.6 | 145.0 66.6 65.7 | 150.8 75.2 65.4 | 146.0 83.4 123.2 | 141.9 72.6 122.6
     Y   58.6 33.4 79.9 | 60.5 31.0 81.3 | 69.4 23.2 79.8 | 91.5 14.1 76.0 | 83.5 21.7 69.4
     Z   103.2 70.5 156.1 | 107.1 70.9 154.0 | 109.8 72.5 153.1 | 56.0 77.6 143.2 | 52.7 77.6 139.9

J7   X   67.9 68.1 148.0 | 75.7 64.5 150.3 | 77.5 62.2 149.0 | 59.4 32.6 79.8 | 49.5 46.7 70.8
     Y   108.5 24.2 74.9 | 118.8 33.7 74.0 | 125.2 39.7 74.1 | 126.3 60.5 129.6 | 105.5 52.4 138.2
     Z   29.5 80.2 62.5 | 32.7 69.5 65.7 | 38.0 64.2 64.2 | 128.7 77.3 41.5 | 135.4 66.7 54.6

J8   X   161.5 108.0 86.1 | 155.6 106.3 72.4 | 151.7 116.0 79.6 | 92.2 124.5 145.4 | 68.8 132.3 130.2
     Y   105.6 25.8 69.9 | 99.3 23.4 68.8 | 111.6 29.2 71.3 | 110.9 39.0 121.2 | 123.3 66.1 136.9
     Z   99.6 72.1 159.5 | 112.3 73.7 151.9 | 107.4 77.5 158.4 | 21.0 74.1 103.4 | 41.1 51.8 102.9

J16  X   17.0 106.5 94.2 | 37.3 127.1 86.4 | 43.6 133.6 88.4 | 43.4 131.6 79.9 | 33.9 122.8 82.4
     Y   75.5 22.9 107.3 | 54.4 43.1 111.0 | 53.1 52.4 121.1 | 68.2 80.6 156.1 | 65.7 64.3 143.3
     Z   81.2 74.5 17.9 | 80.2 71.3 21.3 | 70.1 67.0 31.2 | 54.7 43.1 68.6 | 67.8 44.0 54.3

J17  X   171.1 86.4 81.9 | 153.9 68.5 76.1 | 146.4 63.4 71.0 | 114.1 43.1 56.7 | 116.5 49.2 52.3
     Y   81.4 53.0 38.3 | 65.6 58.6 41.7 | 61.1 72.5 34.7 | 53.5 104.1 40.0 | 48.6 102.7 44.1
     Z   92.0 37.2 127.2 | 81.4 39.6 128.3 | 74.3 32.7 117.9 | 46.1 50.3 109.5 | 52.9 43.5 109.4

J20  X   84.0 113.1 156.1 | 98.3 116.8 151.7 | 126.3 106.0 139.2 | 133.4 136.5 88.1 | 81.3 157.7 110.3
     Y   23.9 66.5 93.8 | 12.3 101.9 93.2 | 44.7 128.4 109.2 | 44.0 133.3 96.4 | 55.9 68.3 137.8
     Z   67.0 146.0 66.4 | 99.0 150.2 61.9 | 112.4 137.1 55.7 | 84.0 93.0 6.7 | 35.5 94.7 54.9

J21  X   88.6 144.1 54.2 | 110.2 143.5 61.0 | 150.0 114.9 74.5 | 141.5 126.4 100.8 | 111.2 137.2 125.1
     Y   43.4 65.4 56.9 | 32.2 92.4 57.9 | 60.2 132.6 57.1 | 52.3 142.1 93.0 | 21.3 108.0 101.0
     Z   133.3 65.6 53.2 | 114.0 53.6 46.0 | 92.7 52.8 37.3 | 96.7 99.0 11.2 | 92.2 127.2 37.3

TABLE A-X
QUANTITATIVE 3D PEDESTRIAN DESCRIPTIONS FOR MULTIPLE FRAMES EXTRACTED FROM VIDEO 3. THE X-Y-Z COORDINATES FOR EACH JOINT OF THE RECONSTRUCTED POSE ARE GIVEN IN MM.

Joint   Frame 1 | Frame 2 | Frame 3 | Frame 4 | Frame 5

J0    -2 -214 26 | -2 -214 26 | -2 -215 26 | -2 -215 26 | -2 -214 26
J1    -42 -315 67 | -43 -314 66 | -44 -314 65 | -44 -314 64 | -40 -314 69
J2    49 -292 -38 | 50 -292 -39 | 51 -292 -38 | 52 -292 -37 | 47 -292 -40
J3    15 -110 56 | 14 -111 56 | 14 -112 57 | 13 -113 58 | 16 -111 56
J4    -167 -600 -117 | -182 -584 -133 | -173 -610 -105 | -158 -635 -70 | -159 -634 -59
J5    72 -640 -167 | 47 -632 -191 | 36 -629 -199 | -1 -618 -212 | 0 -636 -176
J6    -7 17 60 | -9 16 59 | -12 16 59 | -18 15 56 | -18 15 59
J7    -135 -977 -194 | -140 -969 -143 | -108 -990 -74 | -82 -1015 -49 | -69 -1012 -59
J8    177 -1011 -137 | 179 -977 -70 | 171 -936 -5 | 115 -948 -45 | 57 -1009 -90
J9    -43 60 46 | -46 60 46 | -50 59 44 | -58 55 38 | -61 54 44
J10   -209 -999 -293 | -197 -1005 -249 | -163 -1035 -178 | -151 -1059 -145 | -139 -1064 -150
J11   152 -1042 -259 | 176 -1045 -178 | 183 -1012 -107 | 103 -1018 -152 | 29 -1048 -210
J12   -56 278 53 | -61 276 47 | -70 274 47 | -99 265 56 | -104 263 56
J13   -75 180 124 | -76 183 121 | -88 179 118 | -116 163 117 | -114 164 122
J14   -6 188 -13 | -11 186 -18 | -16 187 -16 | -31 189 -8 | -41 186 -10
J15   -120 317 56 | -127 314 47 | -136 314 44 | -167 304 50 | -174 294 47
J16   -112 178 208 | -111 191 206 | -130 187 200 | -170 157 192 | -163 162 200
J17   24 208 -97 | 22 204 -100 | 11 213 -97 | 0 230 -84 | -20 223 -92
J18   -148 -29 331 | -171 3 353 | -209 6 349 | -266 -44 308 | -227 -32 343
J19   18 27 -265 | 36 24 -264 | -32 75 -298 | 10 133 -313 | 14 116 -315
J20   -293 -114 175 | -335 -148 290 | -376 -152 312 | -429 -211 282 | -334 -179 201
J21   -193 -47 -349 | -186 -14 -349 | -271 84 -339 | -202 18 -345 | -183 -20 -344
J22   -325 -158 114 | -365 -216 256 | -411 -221 286 | -465 -281 258 | -349 -240 148
J23   -259 -91 -364 | -260 -46 -356 | -350 67 -341 | -260 -39 -347 | -234 -84 -345
