Supporting Information

Lukeman et al. 10.1073/pnas.1001763107

SI Text

Location and Materials. Data were gathered March 1–12, 2008, by photography from an elevated promenade at Canada Place (Burrard Inlet, Vancouver) overlooking the inlet where overwintering surf scoters were foraging. We used a Nikon D70s DSLR camera, a Manfrotto 190XPROB tripod, and a Nikon AF-S Nikkor 18–70 mm ED lens fixed at the maximal focal length (70 mm). Images were taken in continuous autofocus mode, at 3 frames per second (fps), at a resolution of 1,000 × 1,504 pixels, with aperture at f/4.5 and exposure times of 1/8,000–1/250 s. We refer to a single continuous recording session as a “sequence.”

Experimental Technique. The camera elevation, angle, and image frame were fixed during each sequence and quantified accurately. The lower edge of the viewfinder was always aligned with the edge of a stationary dock (Fig. S2A). The camera was fired continuously at 3 fps until individuals had completely left the frame (either through diving or by swimming outside the image region).

Calibration and Testing. Simple trigonometric conversions were used to transform positions within the camera frame to real-world positions. Additionally, as commercial cameras (such as the one used in this study) are not designed for precise data collection, we tested the manufacturer’s specifications for accuracy. Using known geometry (Fig. S2A), we find the camera axis angle:

\[ \theta = \arctan\left(\frac{9.0}{17.5}\right) = 0.475\ \text{rad} = 27.2^\circ. \]

The Nikon D70s image sensor (23.7 × 15.6 mm) and the lens (manufacturer’s specified focal length f = 70 mm) imply an angle-of-view

\[ \phi = 2\arctan\left(\frac{d_v}{2f}\right) = 2\arctan\left(\frac{15.6\ \text{mm}}{140\ \text{mm}}\right) = 0.222\ \text{rad} = 12.7^\circ, \]

where d_v is the vertical dimension of the image sensor. Our own calibration using grids (Fig. S2C) found the true angle-of-view to be 0.235 rad = 13.5°. Moreover, in image postprocessing, the bottom 28 pixels (of 1,000) are removed to eliminate the edge of the dock. The image size is therefore 1,504 × 972 pixels, and the angle-of-view is ϕ = 0.972·0.235 = 0.229 rad (or 13.1°).

To convert coordinates on an image frame to real-world positions, we apply a vertical transformation (correcting for the camera axis angle) and a horizontal transformation (correcting for projective perspective distortion: lines that are parallel in reality will converge in an image taken at a nonzero camera axis angle). Let y denote the (actual) distance from the dock edge, here given in units of image pixels, and ϕ̂ the angle corresponding to that distance (Fig. S2A). The range 0 ≤ ϕ̂ ≤ ϕ is mapped linearly onto the image’s pixel range [0, 972]. Then y is obtained from

\[ y = \frac{972}{\phi}\left[\tan\!\left(\theta + \frac{p\,\phi}{972}\right) - \tan\theta\right], \qquad [1] \]

where p = 972 ϕ̂/ϕ is the vertical pixel, and we readily find the maximal vertical pixel in the image to be L = 1,421.

To correct for horizontal perspective distortion in the images (symmetric about a vertical line through the center of the image, and linear with distance from this line to the image edge), we photographed a grid at the same camera axis angle θ. The ratio of the actual measured lengths, (top edge)/(bottom edge) of the image frame, giving the maximal perspective distortion, was found to be 1.197. Using this value, a calibration matrix was determined to map horizontal image coordinates to real coordinates (Fig. S2B).

The combined vertical and horizontal transformations were rechecked by using the known grid (Fig. S2C). As shown in Fig. S2D, the transformed positions accurately reconstruct the original grid. A very small pincushion distortion (an image aberration that compresses the center of the field) is assumed to be negligible.
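For concreteness, the vertical transformation of Eq. 1 can be expressed as a short MATLAB function. This is an illustrative sketch (the function name is ours), using the calibrated values θ = 0.475 rad and ϕ = 0.229 rad derived above; evaluating it at p = 972 recovers the maximal vertical extent L ≈ 1,421 real pixels.

```matlab
% Vertical transformation (Eq. 1): vertical image pixel p (0-972,
% measured from the dock edge) -> real-world distance y from the
% dock edge, in units of image pixels. Sketch, not the study's code.
function y = verticalTransform(p)
    theta = 0.475;    % camera axis angle (rad)
    phi   = 0.229;    % effective vertical angle-of-view (rad)
    y = (972/phi) .* (tan(theta + p.*phi/972) - tan(theta));
end
```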

We tested the camera frame rate by taking 60 consecutive images of a stopwatch in continuous mode, at a shutter speed of 1/250 s, using the same image resolution as in the field study. We found that the time between captured images was 1/3 s (built-in frame rate) + 1/250 s (shutter speed), with an error of less than 1%. By restricting the photography to low resolution, we ensured rapid data writing to the camera storage buffer, so that the accurate frame rate was always maintained.

Postprocessing Images. Data were stored as a series of JPEG images (e.g., Fig. S3 A and B), then processed in MATLAB to identify individual ducks and determine their positions. A set of (x, y) coordinates then characterizes every individual in every frame. These positions were linked frame-by-frame to create trajectories, as described further on.

Extracting Positions. Image processing was performed in MATLAB, primarily by using the im2bw, bwmorph, imopen, and bwlabel routines. JPEG images are stored as an m × n × 3 matrix, i.e., three color layers (red, green, blue, each on a 0–255 scale), with m = 1,504 and n = 972 for the horizontal and vertical pixel coordinates.

Individual ducks in groups (Fig. 2A) were identified by using a color criterion (red > blue or green) and thresholded to create white-on-black images (Fig. 2B). Morphological operations on the thresholded image were used to identify and reinforce objects. Error correction was achieved by manual comparison of automatically generated flock images with an overlay of the original images. An image-editing user interface (purpose-built in MATLAB for this study) was used to fix rare errors where needed, e.g., joining or separating objects, marking missed individuals, etc. (The well-spaced flocks made the error rate quite low.) The center of mass of each object was calculated (Fig. 2C) and transformed to real pixel positions as described above. Examples of reconstructed and transformed positions are shown as blue circles in Fig. 2D and Fig. S3 C and D.
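A minimal MATLAB sketch of this extraction step follows; the filename, threshold form, and structuring-element size are illustrative assumptions rather than the study's calibrated settings.

```matlab
% Identify duck bodies by the red-dominant color criterion, clean
% the binary mask morphologically, and return object centers of mass.
img = imread('frame0001.jpg');            % m x n x 3 JPEG frame
R = img(:,:,1); G = img(:,:,2); B = img(:,:,3);
mask = R > G & R > B;                     % color criterion (red > blue or green)
mask = imopen(mask, strel('disk', 1));    % remove isolated noise pixels
mask = bwmorph(mask, 'close');            % reinforce object interiors
stats = regionprops(bwlabel(mask), 'Centroid');
positions = cat(1, stats.Centroid);       % (x, y) center of mass per duck
```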

Tracking. The sets of positions for individuals in each frame were exported to ImageJ, and the “Particle Tracker” plugin was used to associate a position in one frame with the corresponding position in the next frame. This generated trajectories of individuals that were imported to MATLAB, where speed and heading were calculated.

To adapt the tracking algorithm to our dataset (highly polarized groups with directional bias, as distinct from loose aggregates of Brownian particles), we used the following predictor-corrector procedure:

(i) Trajectories calculated with particle-tracking software were filtered, retaining segments whose changes in position and heading were below some realistic threshold. Unrealistic portions were discarded.

(ii) The velocity of individuals was assigned either directly from such trajectories or by using the mean velocity of neighbors in a local region in cases where the trajectories were not acceptable.

(iii) The entire sequence was retracked, using the estimated velocities as predictions for the following step and building the trajectory based on the closest individual to the predicted location.

This process resulted in essentially complete trajectory identification (<1% of individuals excluded) and considerably improved velocity estimation (compare Fig. S4B, the final result, with Fig. S4A, before this retracking step). The resulting corrected trajectories were substantially more accurate and longer, with fewer gaps. Sample trajectories for polarized groups (Fig. S5 A and B) and for convergence and splitting of groups (Fig. S5 C and D) are given.
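As an illustration of step (iii), here is a minimal MATLAB sketch of the prediction-based frame-to-frame linking; the function name, inputs, and gating threshold are our own assumptions.

```matlab
% Link each individual in frame t to the detection in frame t+1 that
% is closest to its predicted position. posT, posNext: k x 2 positions;
% velT: k x 2 estimated velocities (per frame); maxJump: largest
% realistic displacement. idx(i) == 0 flags an unmatched individual.
function idx = retrackFrame(posT, velT, posNext, maxJump)
    n = size(posT, 1);
    idx = zeros(n, 1);
    for i = 1:n
        pred = posT(i,:) + velT(i,:);           % predicted location
        d = sqrt(sum((posNext - pred).^2, 2));  % distances to detections
        [dmin, j] = min(d);
        if dmin <= maxJump
            idx(i) = j;                         % accept realistic match only
        end
    end
end
```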

Edge Effects. Individuals at the edge of a flock either play unique roles (e.g., as leaders in front) or experience effects (such as exposure to danger) different from those of interior flockmates. Here, our primary purpose was to understand the behavior of a typical interior individual, defined as having neighbors in each of the four quadrants within a local disk. The others, identified as “edge” individuals, were excluded from this analysis.

Processed Events. We categorize data into sequences, each referring to a series of images taken in succession of a particular group movement (approach, return, etc.). A total of 14 sequences were processed, with frames per sequence ranging from 22 to 137. In all, 828 frames were analyzed, and a total of 75,269 positions were obtained over all frames (42,599 after eliminating edge individuals).

Units. Image positions are first recorded in units of image pixels, then transformed for perspective as described above, resulting in units of “real pixels.” By manually measuring the body length (BL) of test individuals in the bottom center of a number of frames (where no spatial transformation occurs), an estimate of 1 BL in real pixels was calculated to be 46 pixels = 1 BL. Real-pixel units are converted to units of BL by using this conversion factor. The body length of an adult surf scoter is ≈0.44 m.

Body Alignment Versus Velocity. Although individual velocity vectors are generally aligned with the body axis, the presence of currents can lead to discrepancies between these directions. We corrected for water currents by using the direction and speed of scoter excreta as “intrinsic fluid tracers.” Currents were found to be essentially parallel to the dock, and any component perpendicular to the dock was small enough to neglect. To quantify currents, we use two distinct indicators. The first (below) relies on body alignment vectors and fluid tracers. The second is a direct measure of current (fluid tracer velocity). The results of these distinct approaches agreed, affirming that the methodology is sound.

Measuring Body Alignment. We determine the breakdown of the observed velocity v_obs = s(v_x, v_y) (where s is speed) into a sum of drift due to current and “true velocity” (parallel to the body axis) as follows. Let v_b = s(b_x, b_y) be the “true” velocity, and let c = (c_x, c_y) be the current velocity, so that v_b + c ∥ v_obs (where ∥ denotes parallel vectors). In component form, and assuming c_y = 0 for currents along the dock,

\[ \begin{pmatrix} s\,b_x + c_x \\ s\,b_y \end{pmatrix} \parallel \begin{pmatrix} s\,v_x \\ s\,v_y \end{pmatrix}, \]

implying that

\[ c_x = \frac{s\left(b_y v_x - b_x v_y\right)}{v_y}. \qquad [2] \]

We computed the estimates v̂_obs and ŝ from trajectories, and body alignment from image frames. MATLAB was used to compute c_x via Eq. 2. In this analysis, we assume currents to be constant throughout a given sequence (the longest of which lasts ≈45 s).
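In MATLAB, Eq. 2 is a one-line, vectorized computation; the variable names below are illustrative.

```matlab
% Eq. 2 over all measured individuals in a sequence.
% s: n x 1 speeds; (bx, by): body-axis unit vectors from image frames;
% (vx, vy): observed-velocity unit vectors from trajectories.
cx = s .* (by .* vx - bx .* vy) ./ vy;    % x component of the current
```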

Measuring Currents Directly. Quantifying currents directly from the velocity of an intrinsic fluid tracer is less prone to measurement error. A succession of frames is analyzed (Fig. S6A) to calculate the horizontal tracer velocity c_{x,obs} (BL/s) for each excretum, and the result is compared with c_x calculated above. A linear least-squares fit of c_{x,obs} vs. c_x (Fig. S6B) shows a high correspondence (correlation coefficient r = 0.93, slope of linear fit β = 1.02). Based on this agreement, we correct all velocities by subtracting the fluid drift component, i.e., we subtract (c_{x,obs}, 0) from all velocities for the given sequence.

Density and Alignment Distributions. Averaging relative neighbor positions over the entire dataset gives a 2D plot (Fig. 2A) that characterizes the likelihood of finding a neighbor at a given location. The distribution (discretized with bins of size Δx = Δy = 0.048 BL and smoothed with radius 0.14 BL) is normalized to have an average value of 1. The distribution of neighbor deviations is constructed similarly, with the deviation value (the difference between the focal individual’s heading and the averaged neighbor heading) at each bin determined by an average over all neighbors found within that bin.
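A sketch of the binning and normalization in MATLAB, assuming the relative neighbor offsets relX, relY (in BL) have already been computed; the disk filter is one plausible implementation of the 0.14-BL smoothing.

```matlab
% 2D neighbor-density map: bin relative positions at 0.048 BL,
% smooth with an ~0.14 BL radius, normalize to an average value of 1.
edges = -5:0.048:5;                          % bins spanning +/- 5 BL
H = histcounts2(relX, relY, edges, edges);   % raw neighbor counts
H = conv2(H, fspecial('disk', 3), 'same');   % 3 bins ~ 0.14 BL radius
H = H / mean(H(:));                          % mean value 1, as in the text
```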

Model. Among the many models for the collective motion of animal groups, there is a combinatorial proliferation due to the many possible choices of assumed forces, interactions, decision hierarchies, and effects on individual motion. Although five particular models are reported here, we tested many others.

The position x_i and velocity v_i of an individual i are governed by

\[ \dot{\vec{x}}_i = \vec{v}_i, \qquad [3] \]

\[ \dot{\vec{v}}_i = \vec{f}_{i,\mathrm{aut}} + \vec{f}_{i,\mathrm{int}} + \vec{\xi}_i, \qquad [4] \]

where f_i,int represents social forces (interaction with neighbors), f_i,aut represents autonomous forces due to influences other than interaction (including environmental cues), and ξ_i is (Gaussian) noise.

(These “forces” are based on a system of units where the mass of an individual is scaled to be 1 unit.) The coordinate system is based on the x axis lying along, and the y axis normal to, the edge of the fixed dock, used as reference.

Interactions are implemented as a force imparting an acceleration on individuals (distinct from constant-speed models where interactions influence only heading). A normalized force function g(x) (such that |g(x)| is maximally 1; see Fig. S7A) accounts for both attraction [where g(x) > 0] and repulsion [where g(x) < 0] as a function of the relative distance x between neighbors.

We introduce the following notation: let d_ij = |x_j − x_i| denote the relative distance and u_ij = (x_j − x_i)/d_ij the corresponding unit vector from individual i to neighbor j. The borders of the zones of repulsion, alignment, and attraction about a given individual are defined by the radii r_rep, r_al, and r_att shown in Fig. S7A. The corresponding zones are denoted R for 0 ≤ x ≤ r_rep, AL for r_rep ≤ x ≤ r_al, and ATT for r_al ≤ x ≤ r_att, where x is the distance from the focal individual. For a given individual i, with n_rep, n_al, and n_att, respectively, the number of neighbors in each zone, we define the repulsion/alignment/attraction forces with a hierarchy of decisions as described below.

Repulsion, f_i,rep. The first decision is based on repulsion, taken to be an average of the influences of neighbors in zone R,


\[ \vec{f}_{i,\mathrm{rep}} = \frac{1}{n_{\mathrm{rep}}} \sum_{j=1}^{n_{\mathrm{rep}}} g\!\left(d_{ij}\right)\vec{u}_{ij}, \qquad \vec{x}_j \in R. \qquad [5] \]

As shown in Fig. S7, g(x) < 0 in R, leading to acceleration away from this distance-weighted average of neighbor positions in R.

Attraction, f_i,att. If n_rep = 0 and n_att > 0 for individual i, then the resulting attraction is

\[ \vec{f}_{i,\mathrm{att}} = \frac{1}{n_{\mathrm{att}}} \sum_{j=1}^{n_{\mathrm{att}}} g\!\left(d_{ij}\right)\vec{u}_{ij}, \qquad \vec{x}_j \in ATT. \qquad [6] \]

Because g(x) > 0 in ATT, individuals move toward the average (distance-weighted) position of neighbors in ATT.

Alignment, f_i,al. If n_rep = 0 and n_al > 0, then an alignment force is imposed with

\[ \vec{f}_{i,\mathrm{al}} = \frac{1}{n_{\mathrm{al}}} \sum_{j=1}^{n_{\mathrm{al}}} \frac{\vec{v}_j}{\lvert \vec{v}_j \rvert}, \qquad \vec{x}_j \in AL. \qquad [7] \]

If neighbors in zone AL are already aligned with individual i, this force is similar to a self-propulsion term directed along the axis of alignment. In the absence of neighbors to align to (n_al = 0), we set

\[ \vec{f}_{i,\mathrm{al}} = \frac{\vec{v}_i}{\lvert \vec{v}_i \rvert}. \]

Frontal interaction, f_i,front. The frontal interaction is an attraction/repulsion interaction with a single nearest neighbor x_j found in a frontal angular sector of angle θ within radius r_att, given by

\[ \vec{f}_{i,\mathrm{front}} = g_f\!\left(d_{ij}\right)\vec{u}_{ij}, \qquad [8] \]

where g_f(x) is given in Fig. S7B. If no such neighbor is detected, then f_i,front = 0. We note that g_f(x) features a distance-independent attraction to a frontal neighbor in [r_rep, r_att], reflecting the follow-the-leader tendency.

Weighting of Interaction Forces. From Eqs. 5–8, the magnitudes of f_i,rep, f_i,att, and f_i,front are all bounded above by 1 BL s⁻², whereas the magnitude of f_i,al is 1 BL s⁻². To allow for different relative contributions of the forces, we define the total effective force on individual i as the weighted sum

\[ \vec{f}_{i,\mathrm{int}} = \omega_{\mathrm{rep}} \vec{f}_{i,\mathrm{rep}} + \omega_{\mathrm{att}} \vec{f}_{i,\mathrm{att}} + \omega_{\mathrm{al}} \vec{f}_{i,\mathrm{al}} + \omega_{\mathrm{front}} \vec{f}_{i,\mathrm{front}}, \]

where the weighting parameters ω_rep, ω_al, ω_att, and ω_front are to be fitted using a goodness-of-fit criterion and an optimization routine.
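A compact MATLAB sketch of this decision hierarchy, combining Eqs. 5–8, is given below. The piecewise shapes used for g and g_f only follow Fig. S7 qualitatively, and the fallback to self-propulsion whenever the alignment term is inactive is our reading of the hierarchy; both are assumptions, not the study's code.

```matlab
% Interaction force on individual i (Eqs. 5-8, combined by the
% weighted sum above). X: n x 2 positions (BL); V: n x 2 velocities;
% w: weights (w.rep, w.att, w.al, w.front); r: radii (r.rep, r.al, r.att).
function F = interactionForce(i, X, V, w, r)
    % Assumed shapes matching Fig. S7 qualitatively: g < 0 in R,
    % 0 in AL, > 0 in ATT; gf < 0 below r.rep, +1 on [r.rep, r.att].
    g  = @(x) -(x < r.rep).*(1 - x/r.rep) ...
              + (x > r.al & x <= r.att).*(x - r.al)/(r.att - r.al);
    gf = @(x) -(x < r.rep).*(1 - x/r.rep) + (x >= r.rep);

    d = sqrt(sum((X - X(i,:)).^2, 2));   d(i) = inf;   % distances d_ij
    U = (X - X(i,:)) ./ d;                             % unit vectors u_ij
    inR   = d <= r.rep;
    inAL  = d > r.rep & d <= r.al;
    inATT = d > r.al  & d <= r.att;

    F = [0 0];
    if any(inR)                        % repulsion takes priority (Eq. 5)
        F = F + w.rep * mean(g(d(inR)) .* U(inR,:), 1);
    elseif any(inATT)                  % attraction if no repulsion (Eq. 6)
        F = F + w.att * mean(g(d(inATT)) .* U(inATT,:), 1);
    end
    if ~any(inR) && any(inAL)          % alignment with AL neighbors (Eq. 7)
        Vn = V(inAL,:) ./ sqrt(sum(V(inAL,:).^2, 2));
        F = F + w.al * mean(Vn, 1);
    else                               % self-propulsion fallback
        F = F + w.al * V(i,:) / norm(V(i,:));
    end

    % Frontal interaction (Eq. 8): single nearest neighbor within
    % +/- 30 degrees of the heading and within r.att.
    th = atan2(U(:,2), U(:,1)) - atan2(V(i,2), V(i,1));
    front = abs(atan2(sin(th), cos(th))) <= pi/6 & d <= r.att;
    dFront = d;   dFront(~front) = inf;
    [dmin, j] = min(dFront);
    if isfinite(dmin)
        F = F + w.front * gf(d(j)) * U(j,:);
    end
end
```

With, e.g., r = struct('rep',1.45,'al',3,'att',5) and the fitted weights of Fig. S8, this reproduces the qualitative force balance described above.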

Parameters. Parameters in the full model are categorized into two groups: those that are fixed (either determined from data or having negligible effect on model output) and those that are free parameters, fitted to the model.

Fixed Parameters. The radii of the zones of repulsion, alignment, and attraction, r_rep, r_al, and r_att, are estimated directly from data. We fix r_rep = 1.45 BL to match the mean observed nearest-neighbor distance. We estimate r_al = 3 BL based on Fig. 2B; this is approximately twice the distance of minimal deviation (1.45 BL) and corresponds to a threshold mean deviation value of ≈11°. We choose r_att = 5 BL as an estimate of the maximal interaction radius. Model results are insensitive to this width because of the upper limit on the number of interacting neighbors. The angular width of the frontal interaction zone is taken to be θ = ±30°.

Autonomous propulsion a is chosen to be a vector of the form (0, a), with a = 0.5 BL s⁻² as a reference parameter against which other relative force magnitudes are defined. The equilibrium speed in the A/R/A model is then

\[ v_0 = \frac{a + \omega_{\mathrm{al}}}{\gamma}, \qquad [9] \]

and so the drag coefficient γ is determined by a and ω_al by matching the equilibrium speed to the mean speed calculated from the empirical data (2.0 BL/s). Fixed parameters are summarized in Table S1.
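A quick numeric check of Eq. 9 in MATLAB, using the A/R/A weight fitted in Fig. S8B; treating γ as the coefficient of a linear drag in the autonomous force is our reading of Eq. 9 and Table S1.

```matlab
% Eq. 9 with a = 0.5 BL s^-2 and the fitted w_al = 0.74. Table S1
% sets gamma = (w_al + a)/2 s^-1, which makes the equilibrium speed
% v0 equal the observed mean speed of 2.0 BL/s.
a = 0.5;  w_al = 0.74;
gamma = (w_al + a) / 2;       % drag coefficient (s^-1)
v0 = (a + w_al) / gamma       % equilibrium speed: 2.0 BL/s
```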

Free Parameters. We define the relative magnitude of the noise, ω_ξ, such that ξ_i = ω_ξ ξ̂_i, where ξ̂_i has mean 0 and SD 1. The relative weights ω_rep, ω_att, ω_al, ω_front, and ω_ξ are then free parameters. We use an optimization algorithm to find the set of parameters that provides the best fit to the observed data.

Specific Models Tested.

Basic attraction-repulsion. Corresponding to Fig. 3A, we set ω_al = 0, so that f_i,int = ω_rep f_i,rep + ω_att f_i,att, where R and ATT are as in Fig. S8A Left. Model predictions (blue curves in Fig. S8) have a more sharply defined set of radial peaks in the distribution of neighbors (Fig. S8A Center), and a more uniform angular density (Fig. S8A Right), than the data (green lines in the respective panels).

A/R/A. Corresponding to Fig. 3B, we set f_i,int = ω_rep f_i,rep + ω_att f_i,att + ω_al f_i,al, where the zones are given in Fig. S8B Left. There is a slight improvement in the predicted radial and angular neighbor densities (Fig. S8B), but not yet an adequate match of the sharp transition from low to high density observed in the angular density data. Extensive search in parameter space failed to improve this fit.

A/R/A with blind angle. Corresponding to Fig. 3C, we modify the A/R/A model to include a blind angle of ±45° (Fig. S8C: no interaction with neighbors in the black sector). There is little effect on the radial neighbor distribution (Fig. S8C Center), but the angular neighbor density now disagrees qualitatively with the data (Fig. S8C Right).

A/R/A with angular weighting. Corresponding to Fig. 3D, we modify the A/R/A model by weighting attraction and repulsion interactions by relative angle. Thus, attraction and repulsion are written as

\[ \vec{f}_{i,\mathrm{att}} = \frac{1}{n_{\mathrm{att}}} \sum_{j=1}^{n_{\mathrm{att}}} \frac{\exp\!\left(w \cos\theta_{ij}\right)}{\exp(w)}\, g\!\left(d_{ij}\right)\vec{u}_{ij}, \qquad \vec{x}_j \in ATT, \qquad [10] \]

\[ \vec{f}_{i,\mathrm{rep}} = \frac{1}{n_{\mathrm{rep}}} \sum_{j=1}^{n_{\mathrm{rep}}} \frac{\exp\!\left(w \cos\theta_{ij}\right)}{\exp(w)}\, g\!\left(d_{ij}\right)\vec{u}_{ij}, \qquad \vec{x}_j \in R. \qquad [11] \]

Here, θ_ij is the angle subtended by the relative position vector u_ij and the velocity vector v_i (e.g., a neighbor directly in front corresponds to θ_ij = 0). There is a similarly poor fit of the angular density distribution, despite extensive parameter-space search. Further, the model predicts an elliptical structure of neighbor density, in contrast to the circular structure observed in the data. As a result, the angular density (Fig. S8D Right) does not match observations.
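The angular weight in Eqs. 10 and 11 reduces to a one-line anonymous function in MATLAB (the function name is ours; w is the fitted concentration parameter).

```matlab
% Weight of Eqs. 10-11: equal to 1 for a neighbor dead ahead
% (theta_ij = 0), decaying to exp(-2w) for one directly behind.
angWeight = @(theta, w) exp(w * cos(theta)) ./ exp(w);
```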

A/R/A with frontal interaction. Corresponding to Fig. 3E (not shown in supporting information), we add the frontal interaction force to the A/R/A model, giving

\[ \vec{f}_{i,\mathrm{int}} = \omega_{\mathrm{rep}} \vec{f}_{i,\mathrm{rep}} + \omega_{\mathrm{att}} \vec{f}_{i,\mathrm{att}} + \omega_{\mathrm{al}} \vec{f}_{i,\mathrm{al}} + \omega_{\mathrm{front}} \vec{f}_{i,\mathrm{front}}. \]

Interaction zones, the radial neighbor distribution, the angular neighbor distribution, and parameters are given in Fig. 4 and agree well with the data.

Optimization. Model fitness was evaluated via a comparison with the observed angular and spatial neighbor distributions. The goodness-of-fit measure


\[ E = \left\langle \left( \rho_{\mathrm{data}}(x,y) - \rho_{\mathrm{sim}}(x,y) \right)^2 \right\rangle + \left\langle \left( h_{\mathrm{data}}(\theta) - h_{\mathrm{sim}}(\theta) \right)^2 \right\rangle \qquad [12] \]

was used, where ρ_data and ρ_sim are the observed and simulated 2D spatial distributions of neighbors, respectively, and h_data and h_sim are the observed and simulated angular variations in neighbors. ⟨·⟩ denotes an average (2D and one-dimensional, respectively, in Eq. 12).

Parameter bounds for the ω_k were established by manual inspection of model output over a wide range of values, and these ranges were, in turn, explored via a random-search algorithm. For a chosen model, 1,600 unique parameter sets were randomly generated within the established ranges and, for each, a set of 20 simulations was run to calculate E. Parameter sets generating minimal values of E were used to successively shrink the search ranges, until an optimal set of ω_k was found.
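A sketch of this random search in MATLAB; lb, ub, runModel, and densityMaps are hypothetical stand-ins for the established parameter bounds and for the simulation and binning steps described above.

```matlab
% Random search over weight sets: sample 1,600 points in the bounds,
% average E (Eq. 12) over 20 simulations each, and keep the best.
lb = [0 0 0 0 0];   ub = [10 2 1 2 0.5];   % illustrative bounds on the w_k
best = inf;
for k = 1:1600
    wk = lb + rand(size(lb)) .* (ub - lb);
    Ek = 0;
    for rep = 1:20
        [rho_sim, h_sim] = densityMaps(runModel(wk));   % hypothetical steps
        Ek = Ek + mean((rho_data(:) - rho_sim(:)).^2) ...
                + mean((h_data(:) - h_sim(:)).^2);
    end
    if Ek/20 < best
        best = Ek/20;   wBest = wk;        % current best-fitting weights
    end
end
```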

Distinguishing Front vs. Back Interactions. Fig. 2 A and B reveal several axes of symmetry in the heat maps, including front-back symmetry. Accordingly, we asked whether this symmetry arises from pairs of ducks whose interactions are predominantly frontward, rearward, or some combination of the two. We found two types of circumstantial evidence in support of the first option: (i) movement to the foraging area is initiated by the frontmost ducks and spreads back, suggesting a follow-the-leader interaction; and (ii) we studied the angular deviation of pairs of ducks relative to the mean field direction in their neighborhood and found that individuals too close behind a neighbor had a higher deviation from the mean field than those too close in front. Although the deviation response is likely a combination of avoidance of individuals behind (to avoid aggressive “nipping”) and avoidance of individuals in front (to avoid collisions), it appears that the dominant response is to the front.

Fig. S1. (A) Sample raw image of a duck flock, showing a “vacuole” forming around an encroaching kleptoparasitic gull (Larus glaucescens). (B) The gull attacks a nearby surf scoter 3 s later.

Fig. S2. Obtaining positions. (A) A schematic diagram of the experimental setup. θ = 27.2° is the camera axis angle, and ϕ = 13.1° is the angle-of-view of the camera lens. ϕ̂ represents the angle corresponding to real distance y (in pixels), and L is the real vertical extent of the image (in real pixels, found to be 1,421). (B) The horizontal calibration matrix, associating each (x, y) image coordinate with the horizontal real-pixel value. (C) The calibration grid used to test the image transformation. The pixel location of the upper-right corner of each grid vertex was marked and transformed in MATLAB (D). (D) Forty-two grid points marked in C (x's), with reconstructed positions (o's). The real-world grid is recovered by the vertical and horizontal transformations.


Fig. S3. Sample raw images of duck flocks leaving after a foraging bout (A) and arriving for a new foraging bout (B). These images were processed and transformed, then analyzed via particle-tracking software to obtain positions and velocities for all flock members (C and D, respectively).

Fig. S4. (A) Standard tracking software results in partially incorrect and missing individual velocity vectors (gray), whereas individuals are highly polarized in actuality. (B) After applying our modified predictor-corrector tracking algorithm, the velocity of group members is more accurate and consistent.


Fig. S5. Sample trajectories (blue) for four groups, exhibiting highly polarized behavior (A and B), convergence (C), and splitting (D). Starting positions are plotted in green; final positions in red.

Fig. S6. (A) We used the natural scoter excreta as intrinsic fluid tracers to quantify and correct for drift current. The center of mass of the tracer is estimated in each frame, and an approximate average velocity is calculated over successive frames. (B) Current vector x value c_x vs. tracer drift speed (c_{x,obs}), from raw images, with linear least-squares fit (blue) (slope m = 1.02), showing agreement of the two methods (alignment/velocity differences and tracer velocity) to quantify currents.

Fig. S7. (A) The attraction-repulsion function g(x) is negative in the repulsion region R, zero in the alignment region AL, and positive in the attraction region ATT. The magnitudes of attraction and repulsion are controlled independently by the weighting parameters ω_att and ω_rep. (B) The attraction-repulsion function g_f(x) (used in the f_i,front expression) is negative up to r_rep, and positive and constant beyond r_rep (up to a limit of r_att). The magnitude of g_f(x) is controlled by the weighting parameter ω_front.


Fig. S8. Additional information for the models tested in Fig. 3. (Left) A summary of the type of model. (Center) A comparison of radial neighbor density for data (green) and model (blue). (Right) A comparison of angular neighbor density for data (green) and model (blue) in an annular region centered at the preferred distance (1.45 BL), where 90° corresponds to the front/back and 180° to the left/right. In all models, a = 0.5, ω_rep = 8.5, and ω_att = 1.2. Specific models are attraction-repulsion (parameters as above, with ω_al = 0, ω_ξ = 0.3) (A), A/R/A (ω_al = 0.74, ω_ξ = 0.37) (B), A/R/A with a blind angle (parameters as before, but with ω_ξ = 0.2) (C), and A/R/A with angular weighting (ω_ξ = 0.08) (D). (D, Inset) The weighting function over [−π, π], where 0 corresponds to the front. Additional information for the fifth model (A/R/A with frontal interaction, Fig. 3E) is shown in Fig. 4.


Table S1. Summary of fixed parameters

Parameter   Definition                       Value              Source
r_rep       Radius of repulsion zone         1.45 BL            Data
r_al        Radius of alignment zone         3 BL               Data (estimated)
r_att       Radius of attraction zone        5 BL               Chosen
θ           Angular width of frontal zone    ±30°               Data
a           Autonomous propulsion            0.5 BL s⁻²         Chosen (reference)
γ           Drag coefficient                 (ω_al + a)/2 s⁻¹   Data (via Eq. 9)

Movie S1. This movie is a series of raw JPEG images, taken at 3 fps, at the field data site (Canada Place, Vancouver). It shows a flock of surf scoters approaching the dock, then diving in the vicinity of the dock to collectively forage on mussels. Real-time duration is ≈30 s.

Movie S1 (MOV)


Movie S2. This movie (constructed as in Movie S1) shows a smaller subgroup of scoters converging, then meeting a larger group returning from a dive. The smaller group subsequently merges with the larger group. Real-time duration is ≈45 s.

Movie S2 (MOV)

Movie S3. This movie (constructed as in Movie S1) shows a flock returning to open water after a foraging dive. Real-time duration is ≈15 s.

Movie S3 (MOV)
