
NATIONAL GEOSPATIAL-INTELLIGENCE AGENCY 18.1 Small Business Innovation Research (SBIR)

Proposal Submission Instructions

GENERAL INFORMATION

The National Geospatial-Intelligence Agency has a responsibility to provide the products and services that decision makers, warfighters, and first responders need, when they need it most. As a member of the Intelligence Community and the Department of Defense, NGA supports a unique mission set. We are committed to acquiring, developing and maintaining the proper technology, people and processes that will enable overall mission success.

Geospatial intelligence, or GEOINT, is the exploitation and analysis of imagery and geospatial information to describe, assess and visually depict physical features and geographically referenced activities on the Earth. GEOINT consists of imagery, imagery intelligence and geospatial information.

With our unique mission set, NGA pursues research that will help guarantee the information edge over potential adversaries. Information on NGA’s SBIR Program can be found on the NGA SBIR website at: https://www.nga.mil/Partners/ResearchandGrants/SmallBusinessInnovationResearch/Pages/default.aspx. Additional information pertaining to the National Geospatial-Intelligence Agency’s mission can be obtained by viewing the website at http://www.nga.mil/.

Inquiries of a general nature or questions concerning the administration of the SBIR program should be addressed to:

National Geospatial-Intelligence Agency
Attn: SBIR Program Manager, RA, MS: S75-RA
7500 GEOINT Dr., Springfield, VA 22150-7500
Email: [email protected]

For technical questions and communications with Topic Authors, see DoD Instructions, Section 4.15.
For general inquiries or problems with electronic submission, contact the DoD SBIR Help Desk at [email protected] or 1-800-348-0787 between 9:00 am and 6:00 pm ET.

PHASE I PROPOSAL INFORMATION

Follow the instructions in the DoD SBIR Program BAA for program requirements and proposal submission instructions at http://www.acq.osd.mil/osbp/sbir/solicitations/index.shtml.

NGA has developed topics to which small businesses may respond in this fiscal year 2018 SBIR Phase I iteration. These topics are described on the following pages. The maximum amount of SBIR funding for a Phase I award is $100,000, and the maximum period of performance for a Phase I award is nine months. While NGA participates in the majority of SBIR program options, NGA does not participate in the Fast Track, Commercialization Pilot, Discretionary Technical Assistance (DTA), or Phase II Enhancement programs.

The entire SBIR proposal submission (consisting of a Proposal Cover Sheet, the Technical Volume, Cost Volume, and Company Commercialization Report) must be submitted electronically through the DoD SBIR/STTR Proposal Submission system located at https://sbir.defensebusiness.org/ for it to be evaluated.


The Proposal Technical Volume must be no more than 20 pages in length. The Cover Sheet, Cost Volume and Company Commercialization Report do not count against the 20-page Proposal Technical Volume page limit. Any Technical Volume that exceeds the page limit will not be considered for review. The proposal must not contain any type smaller than 10-point font size (except as legend on reduced drawings, but not tables).

Selection of Phase I proposals will be in accordance with the evaluation procedures and criteria discussed in this BAA (refer to section 6.0 of the BAA).

The NGA SBIR Program reserves the right to limit awards under any topic, and only those proposals of superior scientific and technical quality in the judgment of the technical evaluation team will be funded. The offeror must be responsive to the topic requirements, as solicited.

An unsuccessful offeror has 3 days after notification that its proposal was not selected to submit a written request for a debriefing to the Contracting Officer (CO). Offerors who submit their written request within the allotted timeframe will be provided a debriefing.

Federally Funded Research and Development Centers (FFRDCs) and other government contractors that have signed Non-Disclosure Agreements may be used in the evaluation of your proposal. NGA typically provides a firm-fixed-price, level-of-effort contract for Phase I awards. The type of contract is at the discretion of the Contracting Officer.

Phase I contracts will include a requirement to produce one-page monthly status reports and a more detailed interim report not later than 7 ½ months after award. These reports shall include the following sections:

• A summary of the results of the Phase I research to date
• A summary of the Phase I tasks not yet completed, with an estimated completion date for each task
• A statement of potential applications and benefits of the research

The interim report shall be no more than 750 words long. The report shall be prepared single spaced in 12-point Times New Roman font, with at least a one-inch margin on top, bottom, and sides, on 8 ½” by 11” paper. The pages shall be numbered.

PHASE II GUIDELINES

Phase II is the demonstration of the technology found feasible in Phase I. All NGA SBIR Phase I awardees from this BAA will be allowed to submit a Phase II proposal for evaluation and possible selection.

Small businesses submitting a Phase II Proposal must use the DoD SBIR electronic proposal submission system (https://sbir.defensebusiness.org/). This site contains step-by-step instructions for the preparation and submission of the Proposal Cover Sheets, the Company Commercialization Report, the Cost Volume, and how to upload the Technical Volume. For general inquiries or problems with proposal electronic submission, contact the DoD SBIR/STTR Help Desk at (1-800-348-0787) or Help Desk email at [email protected] (9:00 am to 6:00 pm ET).

NGA SBIR Phase II Proposals have four Volumes: Proposal Cover Sheets, Technical Volume, Cost Volume and Company Commercialization Report. The Technical Volume has a 40-page limit including: table of contents, pages intentionally left blank, references, letters of support, appendices, technical portions of subcontract documents (e.g., statements of work and resumes) and any attachments. Do not include blank pages, duplicate the electronically generated Cover Sheets or put information normally associated with the Technical Volume in other sections of the proposal as these will count toward the 40-page limit.

Technical Volumes that exceed the 40-page limit will be reviewed only to the last word on the 40th page. Information beyond the 40th page will not be reviewed or considered in evaluating the offeror’s proposal. To the extent that mandatory technical content is not contained in the first 40 pages of the proposal, the evaluator may deem the proposal as non-responsive and score it accordingly.

The NGA SBIR Program will evaluate and select Phase II proposals using the evaluation criteria in Section 8.0 of the DoD SBIR Program BAA. Due to limited funding, the NGA SBIR Program reserves the right to limit awards under any topic and only proposals considered to be of superior quality will be funded. NGA typically provides a cost plus fixed fee contract as a Phase II award. The type of contract is at the discretion of the Contracting Officer.

Phase II proposals shall be limited to $1,000,000 over a two-year period with a Period of Performance not exceeding 24 months. A work breakdown structure that shows the number of hours and labor category broken out by task and subtask, as well as the start and end dates for each task and subtask, shall be included.

Phase II contracts shall include a requirement to produce one-page monthly status and financial reports, an interim report not later than 10 months after contract award, a prototype demonstration not later than 23 months after contract award and a final report not later than 24 months after contract award. These reports shall include the following sections:

• A summary of the results of the Phase II research to date
• A summary of the Phase II tasks not yet completed, with an estimate of the completion date for each task
• A statement of potential applications and benefits of the research

The report shall be no more than 750 words long. The report shall be prepared single spaced in 12-point Times New Roman font, with at least a one-inch margin on top, bottom, and sides, on 8 ½” by 11” paper. The pages shall be numbered.

USE OF FOREIGN NATIONALS

See the “Foreign Nationals” section of the DoD program announcement for the definition of a Foreign National (also known as Foreign Persons).

ALL offerors proposing to use foreign nationals MUST disclose this information regardless of whether the topic is subject to export control restrictions. Identify any foreign citizens or individuals holding dual citizenship expected to be involved on this project as a direct employee, subcontractor, or consultant. For these individuals, please specify their country of origin, the type of visa or work permit under which they are performing, and an explanation of their anticipated level of involvement on this project. You may be asked to provide additional information during negotiations in order to verify the foreign citizen’s eligibility to participate on an SBIR contract. Supplemental information provided in response to this paragraph will be protected in accordance with the Privacy Act (5 U.S.C. 552a), if applicable, and the Freedom of Information Act (5 U.S.C. 552(b)(6)).


Proposals submitted with a foreign national listed will be subject to security review during the contract negotiation process (if selected for award). If the security review disqualifies a foreign national from participating in the proposed work, the contractor may propose a suitable replacement. In the event a proposed foreign person is found ineligible to perform proposed work, the contracting officer will advise the offeror of any disqualifications but may not disclose the underlying rationale.


NGA SBIR 18.1 Topic Index

NGA181-001 Topographic Feature Extraction from ground-based still and video imagery
NGA181-002 Cloud computing architecture for next generation video
NGA181-003 Suppression of false alarms in Automated Target Recognizers that use Machine Learning
NGA181-004 Automated Assessment of Urban Environment Degradation for disaster relief and reconstruction
NGA181-005 Deriving Uncertainty Estimates for Automated Observations
NGA181-006 Generalized Change Detection to Cue Regions of Interest
NGA181-007 Video to Feature Data Association and Geolocation
NGA181-008 Blending Ground View and Overhead Models
NGA181-009 Miniaturization of Neural Networks for Geospatial Data
NGA181-010 Low-Shot Detection in Remote Sensing Imagery


NGA SBIR 18.1 Topic Descriptions

NGA181-001 TITLE: Topographic Feature Extraction from ground-based still and video imagery

TECHNOLOGY AREA(S): Information Systems

OBJECTIVE: Fully automate the process of extracting topographic features, such as ridge lines, peaks, skylines, and river banks, from imagery taken at near-ground levels, showing landscapes, hills, mountain ranges, and/or other topographic features.

DESCRIPTION: NGA has an interest in knowing the earth, whether from satellite imagery or from imagery readily available from cameras and video cameras collecting at or close to the earth’s surface. With the increasing resolution of consumer-grade cameras and video systems, it becomes possible to extract relevant topographic features with high accuracy from a wide variety of imagery sources. Today, analysts use manual and semi-automated methods to demarcate ridge lines and other features in imagery in order to collect topographic information. A fully automated approach that requires no human intervention, seed points, or manual “clicks” is highly desired.

Much research has been done on automating or providing semi-automated capabilities of this type (see references); this topic seeks a transitionable, fully automated system for use in government and commercial applications.

A fully automated system would first examine whether an image contains relevant topographic features, such as skyline, or distant ridge lines. It would examine video data to see if extraction can be improved by processing spatio-temporal blocks of data, as opposed to single frames. If the imagery is suitable, it would produce digital representations of topographic features, without any human intervention, labeling the features according to the type.

At a minimum, automated extraction of the skyline under varying meteorological conditions, as well as distant ridge lines in the foreground (with higher mountains or peaks behind the ridge line) with varying land cover types, is required. Other topographic features can add information about the topography and provide more precision in terms of geolocation of features. Thus, offerors may wish to suggest other feature types whose extraction they propose to fully automate.

Various confounding factors should be considered, including the possibility of partial obscuration of features, buildings and towers that might get in the way, clouds and low-lying fog, varying conditions within a video segment, and range estimates so as to discard near features where perspective transformations cause the occlusion line to diverge from the actual ridge of the topography.
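To make the core extraction step concrete, the following is a minimal sketch, assuming an 8-bit grayscale frame as a NumPy array, of skyline extraction posed as a minimum-cost path across image columns; the function name, smoothness weight, and gradient-based cost are illustrative choices, not requirements of this topic.

```python
# Hypothetical sketch: skyline as a minimum-cost column-to-column path.
import numpy as np

def extract_skyline(gray: np.ndarray, smoothness: float = 2.0) -> np.ndarray:
    """Return one skyline row index per image column via dynamic programming.

    The cost favors rows with a strong vertical (sky-to-terrain) gradient;
    the smoothness term penalizes large jumps between neighboring columns.
    """
    grad = np.abs(np.diff(gray.astype(np.float32), axis=0))  # (H-1, W)
    cost = -grad                      # strong edges -> low cost
    h, w = cost.shape
    acc = cost.copy()                 # accumulated path cost per (row, col)
    back = np.zeros((h, w), dtype=np.int32)
    rows = np.arange(h)
    for c in range(1, w):
        # jump[r_prev, r_cur]: penalty for moving between rows across columns
        jump = smoothness * np.abs(rows[:, None] - rows[None, :])
        total = acc[:, c - 1][:, None] + jump
        back[:, c] = np.argmin(total, axis=0)
        acc[:, c] += np.min(total, axis=0)
    # backtrack the minimum-cost path from the cheapest final-column row
    line = np.zeros(w, dtype=np.int32)
    line[-1] = int(np.argmin(acc[:, -1]))
    for c in range(w - 2, -1, -1):
        line[c] = back[line[c + 1], c + 1]
    return line
```

A production system would of course add the suitability check, feature labeling, and video spatio-temporal processing described above; the sketch only illustrates the per-frame skyline step.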

Proposed evaluation criteria, and methods to obtain ground-truthed imagery for both development and testing, should be specified in the proposal.

PHASE I: Identify features to be extracted, identify example data-sets with ground truth information, and design an approach to automated feature extraction, with proof of concept demonstrations.

PHASE II: Obtain development data and, separately, testing data; develop algorithmic methods to fully automate the extraction of topographic features, to include skylines and ridgelines; and evaluate performance under a variety of operating conditions. Generate and/or collect a wide body of data for training and testing, expanding the set of target types and objects, including clutter data, and develop performance figures under varying operating conditions and varying sensor types, taking into account possible confounding factors.

PHASE III DUAL USE APPLICATIONS: NGA has an immediate need for automated capabilities as described, which can be expanded to include other feature types. Dual use applications include needs by cartographers, surveyors, land use developers, and others who can make use of these capabilities. Developers might consider licensing of software, but might also consider providing services against large databases of imagery supplied by commercial companies.

REFERENCES:
1. “Horizon Lines in the Wild”, Scott Workman, Menghua Zhai, Nathan Jacobs

2. “Joint Semantic Segmentation and Depth Estimation with Deep Convolutional Networks” A. Mousavian, H. Pirsiavash

3. “Transient Attributes for High-Level Understanding and Editing of Outdoor Scenes” PY Laffont, Z. Ren, X. Tao, C. Qian, J. Hays

4. “Learning Deep Features for Scene Recognition Using Places Database”, B. Zhou, A. Lapedriza, J. Xiao, A. Torralba

5. “Recognizing Landmarks in Large-Scale Social Image Collections”, D.J. Crandall, Y. Li, S. Lee, D.P. Huttenlocher

6. “Predicting Good Features for Image Geo-Localization Using Per-Bundle VLAD”, Hyo Jin Kim, Enrique Dunn, Jan-Michael Frahm; The IEEE International Conference on Computer Vision (ICCV), 2015, pp. 1170-1178

KEYWORDS: Image processing, algorithm, Infrared, IR, sensors

TPOC-1: Duncan McCarthy
Phone: 571-557-6240
Email: [email protected]

NGA181-002 TITLE: Cloud computing architecture for next generation video

TECHNOLOGY AREA(S): Information Systems

OBJECTIVE: Engineer solutions to address enterprise cloud adoption and distributed computer processing challenges for current and next-generation video content, consistent with mixed media formats and wide viewer and machine applications.

DESCRIPTION: Consistent with recent direction from the Deputy Secretary of Defense, "Accelerating Enterprise Cloud Adoption" (issued 2017/11/07), the National Geospatial-Intelligence Agency seeks next-generation technologies to integrate full motion video (FMV) and other media into a scalable, distributed framework with ingest, storage, and streaming content delivery capabilities. The technical solution should enable deep integration with distributed processing applications, including by both human analysts and emerging machine automation technologies, while maintaining link integrity between the data source(s) and processed results. The technology should be agnostic to media formats to support both legacy and emerging media formats, as well as voice, text, non-video sensor data, and other annotations. The approach should capture the benefits of modern networks, such as Netflix-style storage, streaming and distribution, while avoiding their pitfalls when applied to the government solution space. Example features that go beyond consumer requirements are frame-accurate scrubbing and annotations, distributed storage of a single asset based on network topology constraints, delayed data fusion allowing multiple bitrates and alternative data types to auto-fuse at any time, globally unique marker points independent of asset start time that work with partial assets, and parallel distributed computer processing for live or video-on-demand data streams.
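To make two of the named features concrete, the sketch below models globally unique marker points keyed to absolute capture time rather than asset offset, and delayed fusion of alternative tracks onto the same asset; all class and field names are hypothetical illustrations, not part of this topic.

```python
# Illustrative data-model sketch; names and fields are assumptions.
import uuid
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Marker:
    marker_id: str          # globally unique, survives asset splits and copies
    capture_utc_ns: int     # absolute capture time, not an asset offset
    label: str

@dataclass
class Asset:
    asset_id: str
    start_utc_ns: int
    tracks: dict = field(default_factory=dict)   # e.g. {"video_3mbps": ...}
    markers: list = field(default_factory=list)

    def add_marker(self, capture_utc_ns: int, label: str) -> Marker:
        m = Marker(str(uuid.uuid4()), capture_utc_ns, label)
        self.markers.append(m)
        return m

    def fuse_track(self, name: str, data) -> None:
        # Delayed fusion: a low-bitrate proxy can arrive first, and a
        # mezzanine, audio, or metadata track can auto-fuse at any later time.
        self.tracks[name] = data

    def marker_offset_s(self, m: Marker) -> float:
        # The offset is derived on demand, so a marker remains valid for a
        # partial asset whose start time differs from the original capture.
        return (m.capture_utc_ns - self.start_utc_ns) / 1e9
```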

PHASE I: Provide a report containing a specific technical and architectural approach that solves the challenges described in this topic. Provide specific examples of how the approach matches the feature set of the most recent advanced streaming media networks, as well as specifics on how it will solve the other challenges described in this topic.

PHASE II: Implement the proposed approach and run real-world simulations showing each of the features. The solution must show how it works in hybrid networking models, such as when distribution of a real-time asset from one location to another is delayed, allowing only parts of the asset, such as a low bitrate version, to be distributed and the other parts to be fused later. Show how distributed processing can take place on one element, such as the audio or metadata tracks, while the resulting data is automatically fused to other elements of the asset. Provide simulations of how parallel processing would be possible, enabling algorithms to process a single asset across a compute cluster that is accessible over a network and not directly connected to the asset storage system.

PHASE III DUAL USE APPLICATIONS: Network and commercial video content providers, including coverage of sports, special events, and “breaking news” requiring remote sensing are examples of cases where video capture can use dynamic collection strategies. As consumer use of “video on demand” increases and local storage is increasingly replaced by cloud services, commercial video content distribution can and will depend increasingly on efficient mechanisms for transmitting and caching video data, and can make use of technologies that are inspired by the defense problem space.

REFERENCES:
1. Zomaya, Albert Y. (2014). "Advanced Content Delivery, Streaming and Cloud Services". Wiley

2. Adaptive bitrate streaming. In Wikipedia. en.wikipedia.org/wiki/Adaptive_bitrate_streaming

KEYWORDS: FMV, streaming, sensor data, big data, data fusion, distributed computing, adaptive streaming

NGA181-003 TITLE: Suppression of false alarms in Automated Target Recognizers that use Machine Learning

TECHNOLOGY AREA(S): Human Systems, Information Systems


OBJECTIVE: Develop methods to suppress false alarms in Automated Target Recognition systems that use machine learning to recognize designated objects.

DESCRIPTION: Over the past decade, improvements to machine learning techniques have enabled them to eclipse the performance of other computer vision algorithms for object recognition. In numerous competitions, such as the ImageNet Large-Scale Visual Recognition Challenge (1), as well as in production software (on Google+, for instance), convolutional neural networks (CNNs) have proven to be a useful method to perform object recognition on large databases (2). These approaches can be used to extract specific vehicle types from broad overhead imagery (3), whether provided by electro-optical (EO) imagery or synthetic aperture radar (SAR) imagery.

Automatic Target Recognition (ATR) plays a critical role in identifying targets and reducing operator workload; however, there are increasing challenges that modern ATR algorithms must address. False alarms have historically presented challenges for ATR algorithms, and systems based on machine learning techniques are, to date, still subject to excessively high rates of false alarms. A further challenge for SAR ATR is that target types are growing into hierarchies that are more complex (4). Variants of vehicle classes continue growing as militaries modernize older machines and deploy increasing numbers of variants in increasingly wide varieties of environments. Observational environments vary from open fields to dense urban terrain, where clutter and obscurations further affect target recognition capabilities, and this opening of the aperture of environments can be expected to result in greater numbers of false alarms when systems rely primarily on the image data constrained to the local chip that is ingested by the recognition engine. While neural network approaches such as CNNs are easy to implement and have yielded superior performance when trained, there is a strong need for methods to reduce false alarm rates that would otherwise require analyst review to dismiss objects or coincidences that are not among the target types.

An approach to suppress false alarms might use greater contextual information (5), models of likely deployment, mensuration, fusion and/or verification by reference to other imagery sources (which might include other modalities). Adversarial learning might be used to implement any or all of these approaches, but would need to demonstrate a maintained level of detection without a great deal of manual effort to negate false alarms.
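As a minimal sketch of the contextual-information route, the example below assumes each detection carries a CNN confidence and a geolocation that indexes a deployment-likelihood prior; the fusion weights, threshold, and prior_map are illustrative assumptions, not a prescribed method.

```python
# Hypothetical context re-scoring of detections; weights are assumptions.
import numpy as np

def rescore(scores, locations, prior_map, w_cnn=0.7, w_ctx=0.3, thresh=0.7):
    """Fuse detector confidence with a deployment prior and re-threshold.

    scores:     (N,) CNN confidences in [0, 1]
    locations:  (N, 2) integer row/col indices into prior_map
    prior_map:  2-D array of deployment likelihoods in [0, 1]
                (e.g., low over open water for land vehicles)
    """
    ctx = prior_map[locations[:, 0], locations[:, 1]]
    fused = w_cnn * np.asarray(scores) + w_ctx * ctx
    keep = fused >= thresh
    return fused, keep

# Example: a confident detection in an implausible location is culled.
prior = np.zeros((10, 10))
prior[:, :5] = 1.0            # left half of the scene: plausible terrain
fused, keep = rescore([0.9, 0.9], np.array([[2, 2], [2, 8]]), prior)
print(fused, keep)            # second detection suppressed by context
```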

PHASE I: Implement or collect neural-network-trained ATRs, identify a method, and demonstrate the feasibility of suppressing false alarms through a pilot study.

PHASE II: Implement and evaluate efficient algorithmic processes to suppress false alarms in large image datasets. Test on a larger sample of (potentially classified) government-provided data. Measure performance enhancements in terms of precision and recall, or other appropriate metrics that correlate with analyst workload, improving analyst efficiency by concentrating effort on objects of true intelligence value.

PHASE III DUAL USE APPLICATIONS: A computer vision system that is robust to false object detections could significantly impact a broad range of military and civilian applications; examples include car counting, maritime safety of navigation, automatic map updating, and disaster response.

REFERENCES:
1. Russakovsky, O., et al. ImageNet Large Scale Visual Recognition Challenge. arXiv 1409.0575v3, 2015


2. Ren, S., He, K., Girshick, R., Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. arXiv 1506.01497v3, 2016

3. Mundhenk, T., Konjevod, G., Sakla, W., Boakye, K. A Large Contextual Dataset for Classification, Detection and Counting of Cars with Deep Learning. arXiv 1609.04453, 2016

4. Zamir, A., Wu, T., Shen, W., Malik, J., Savarese, S. Feedback Networks. arXiv 1612.09508, 2016

5. Zagoruyko, S., Lerer, A., Lin, T., Pinheiro, P., Gross, S., Chintala, S., Dollar, P. A MultiPath Network for Object Detection. arXiv 1604.02135v2, 2016

KEYWORDS: Automatic Target Recognition, ATR, False Alarms, ROC curves, clutter suppression

NGA181-004 TITLE: Automated Assessment of Urban Environment Degradation for disaster relief and reconstruction

TECHNOLOGY AREA(S): Information Systems

OBJECTIVE: To develop algorithms that fuse observables from overflight operations (which could include panchromatic, IR, SAR, and MSI data) with data collected from ground sources (such as video from vehicles and drone cameras, LiDAR, and other sources), to automatically estimate the degradation of an urban environment due to battle or natural damage.

DESCRIPTION: Assessing areas of urban damage due to wars, shelling, or natural disasters is currently a manual, labor-intensive process. These processes result in estimates of degradation that are incomplete, with low confidence, and that typically operate at the scale of an entire city or village. Other approaches, such as bomb damage assessment (BDA), look at single targets in order to determine whether a single object has been sufficiently degraded. Neither approach addresses the problem of time-dominant assessment required for disaster relief and emergency reconstruction purposes. The increasing availability of commercial satellite and aerial sources (such as unmanned aerial vehicles deployed by news sources) might permit rapid access to essential information to enable automated assessment of damage. In the same way, existing ground sensors and ground-based collection from first responders can provide rapid response information. These sources can supplement overhead sources and provide more accurate assessments.

It is desirable to exploit these sources of data as rapidly as possible. Among the outputs of interest are an assessment of whether roads and passages are traversable, and whether individual buildings and residences are safe enough for entry or habitation. Automated analysis of fused observables from overhead and ground data would enable 3D assessment that would lead to more accurate estimates with higher confidence at the level of individual buildings, sections of roads and passageways, as well as assessments of blocks, villages, and cities.
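One simple way to illustrate fusing overhead and ground observables: combine each source's per-building damage estimate under a naive conditional-independence assumption. The function name, the independence assumption, and the example values are illustrative, not part of this topic.

```python
# Hedged sketch of two-source damage fusion (naive-Bayes assumption).
def fuse_damage(p_overhead: float, p_ground: float, prior: float = 0.5) -> float:
    """Posterior probability of 'degraded' given two independent observers.

    Each argument is that source's estimate P(degraded | its data);
    inputs are assumed to lie strictly between 0 and 1.
    """
    def odds(p):  # convert a probability to odds
        return p / (1.0 - p)
    # Bayes with conditionally independent sources: multiply likelihood ratios.
    post_odds = odds(p_overhead) * odds(p_ground) / odds(prior)
    return post_odds / (1.0 + post_odds)

# Two moderately confident, agreeing sources yield a stronger fused estimate.
print(fuse_damage(0.7, 0.8))  # ~0.90
```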

PHASE I: Create a proof-of-concept demonstrating the feasibility of fusing observables from overhead imagery (panchromatic, multispectral, IR, SAR, MSI) and ground collection (full motion video--FMV, LiDAR, stills) data collected to generate an estimate of urban environment degradation based on 3D renderings of damage to structures.


PHASE II: Develop a prototype system that demonstrates the improved accuracy of automated 3D assessments of urban environment degradation through fusion of multiple data sources. This prototype system should demonstrate the differences in accuracy of assessments using fused observables from both overhead and ground data compared to solely overhead or solely ground data, and show how multiple data sources improve accuracy compared to single-source data. The prototype will be able to provide different levels of 3D assessments, as real-world observables from overhead and ground collects may vary.

PHASE III DUAL USE APPLICATIONS: These algorithms would be applicable for generating rapid estimates for assessing the extent of terrestrial battle space environment degradation due to air and ground operations, but would also be applicable to 3D automated urban environment assessment for application to disaster management, mitigation, and reconstruction. Assessing damage from wide-area disasters such as hurricanes, tornadoes, and earthquakes, as well as floods and volcanoes, is facilitated by both overhead and ground data collected to yield an accurate estimation of how the environment has been degraded. These estimates would inform restitution efforts for prioritization and safe ingress/egress. Commercial efforts have largely focused on using solely overhead data or manual ground observation. Using fused observables from a broader range of both overhead and ground sources could save lives by speeding recovery efforts.

REFERENCES:
1. Voigt, S. et al. "Rapid Damage Assessment and Situation Mapping: Learning from the 2010 Haiti Earthquake", Photogrammetric Engineering & Remote Sensing, Sept. 2011, p. 923-931.

2. Dong, L., Shan, J. "A Comprehensive Review of Earthquake-induced Building Damage Detection", ISPRS 2013, 84, p. 85-99, Elsevier.

3. He, M. et al. "A 3D Shape Descriptor Based on Contour Clouds for Damaged Roof Detection Using Airborne LiDAR Point Clouds", Remote Sensing 2016, 8, 189.

KEYWORDS: Automated analytics, 3D assessments, fused observables, urban environment degradation, automated damage assessment

NGA181-005 TITLE: Deriving Uncertainty Estimates for Automated Observations

TECHNOLOGY AREA(S): Human Systems, Information Systems

OBJECTIVE: Develop novel ways to assess uncertainty in observations of objects in overhead imagery that were identified by automated systems. Design aggregation techniques that can identify important spatial groupings while considering the uncertainty estimates for the primitives of those groupings.

DESCRIPTION: Recent advances in deep learning have made object recognition and localization systems applicable to a wider variety of tasks. These techniques have started to be applied to overhead imagery; this provides great promise for automated recognition of objects for analysts. As such, this topic has two research components: (1) to enhance an object detection network to be able to describe its own certainty in its identifications; and (2) to develop methods to aggregate observations based on models that consider individual objects’ uncertainty.


For (1), the standard technique for these recognition systems is to use algorithms to classify objects into a pre-defined set of categories. For instance, a network that has been trained to identify cats and dogs, given an image of a chair, will categorize it as a cat or a dog without providing a measure of certainty in its judgment. However, analysts always provide measures of their confidence in their judgments. Automated identification of objects in overhead imagery will require methods to produce measures of uncertainty and to process multiple observations to reduce uncertainty.
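Since references 1 and 2 below propose Monte Carlo dropout as one way to obtain such uncertainty estimates, a minimal sketch follows, assuming a PyTorch classifier containing dropout layers; the function name, the number of passes T, and the use of predictive entropy are illustrative choices.

```python
# Sketch of Monte Carlo dropout (per Gal & Ghahramani, references 1-2).
import torch
import torch.nn as nn

def mc_dropout_predict(model: nn.Module, x: torch.Tensor, T: int = 30):
    """Return predictive mean probabilities and predictive entropy."""
    model.eval()
    for m in model.modules():            # keep dropout stochastic at test time
        if isinstance(m, nn.Dropout):
            m.train()
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(T)]
        ).mean(dim=0)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
    return probs, entropy  # high entropy -> "I don't know", not cat-or-dog
```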

For (2), the research will need to provide example methods and models to identify salient groups of objects and classify activities based on a reasoned, a priori knowledge-base of a given environment codified in the model. Object configurations can suggest a larger class of objects. For instance, rows of cars in a linear formation indicate a parking lot. Configurations of objects can also indicate activity; e.g., a crane over a cargo container ship indicates active on- or off-loading. Spatial data aggregation that employs certainty estimates should be able to ingest models that are easily represented in order to increase accuracy and applicability of judgments suggested by automated systems.
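For illustration only, the sketch below aggregates point detections into a group hypothesis (e.g., "parking lot") while letting each member contribute in proportion to its certainty; the greedy radius clustering, the radius, and the support threshold are assumptions, not requirements of this topic.

```python
# Hypothetical certainty-weighted grouping of detections.
import numpy as np

def weighted_groups(points, certainties, radius=15.0, min_support=3.0):
    """Greedy radius clustering; each member contributes its certainty."""
    points = np.asarray(points, dtype=float)
    certainties = np.asarray(certainties, dtype=float)
    unused = set(range(len(points)))
    groups = []
    while unused:
        i = unused.pop()
        member = [i]
        for j in list(unused):
            if np.linalg.norm(points[i] - points[j]) <= radius:
                member.append(j)
                unused.remove(j)
        support = certainties[member].sum()  # uncertain members count less
        if support >= min_support:
            groups.append((points[member].mean(axis=0), support))
    return groups  # list of (centroid, weighted support)
```

Under this scheme, many low-certainty car detections can still support a parking-lot hypothesis, while a single high-confidence detection in isolation does not.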

The National Geospatial-Intelligence Agency seeks novel approaches that combine these two concepts: uncertainty estimates and spatial object grouping. An object recognition system requires certainty estimates in order to provide the necessary context for future reasoning tasks that analysts currently conduct as a uniquely human task. Spatial data aggregation from an automated recognition system must account for how certain that system is of a particular judgment in order to make its own assessment of the grouping or activity of those objects.

PHASE I: Demonstrate an ability to obtain uncertainty estimates in neural architectures applied to overhead imagery. Design proof-of-concept for algorithmic approaches to object aggregation and/or group activity assessment to include uncertainty information.

PHASE II: Demonstrate an aggregation/activity identification algorithm applied to datasets of images and pre-defined object identifications with certainty estimates on the identifications. Jointly with the government, develop test metrics for performance evaluation of uncertainty measures and aggregation techniques.

PHASE III DUAL USE APPLICATIONS: This system could be used for a broad range of military and civilian applications where the presence of particular objects in groups indicates a particular activity – for example, in monitoring of shipping or commercial business operations.

REFERENCES:
1. Yarin Gal and Zoubin Ghahramani. Bayesian convolutional neural networks with Bernoulli approximate variational inference. arXiv 1506.02142v5, 2016.

2. Yarin Gal and Zoubin Ghahramani. Dropout as a Bayesian approximation appendix. arXiv 1506.02157, 2015.

3. Alex Graves. Practical variational inference for neural networks. In Advances in Neural Information Processing Systems 24 (25th Annual Conference on Neural Information Processing Systems), 2011.

4. D.J.C. MacKay. A practical Bayesian framework for backpropagation networks. Neural Computation, 4(3):448-472, May 1992.

5. John William Paisley, David M. Blei, and Michael I. Jordan. Variational Bayesian inference with stochastic search. In Proceedings of the 29th International Conference on Machine Learning (ICML), 2012.

KEYWORDS: computer vision; overhead imagery; machine learning; spatial data; aggregation

NGA181-006 TITLE: Generalized Change Detection to Cue Regions of Interest

TECHNOLOGY AREA(S): Information Systems

OBJECTIVE: Using frequent revisit imagery, determine when a significant change has occurred that warrants further analysis, using a representation of a model of significance that can be specified on a sliding scale.

DESCRIPTION: Overhead imagery sources, such as small satellite imaging systems, will soon be able to provide imagery of the entire planet on a regular recurring basis (daily, or better). Human analysts will be able to look at only a tiny fraction of this imagery. Pixel-based change detection based on registration of subsequent images of the same terrain can indicate changes, but most changes are not significant, and instead are due to lighting changes, minor misregistration, and normal variations of no significance. When known objects are sought, automated recognition systems might be able to detect changes in the presence of objects of interest in the latest image, to cue further collection or analysis, provided the objects of interest are sufficiently large. However, to detect situations that are not expected but are unusual, and have particular value to the user, a means to cull potential images based on significant changes from prior collects is needed.

This topic is based on a hypothesis that it should be possible to devise algorithms, or to train an algorithmic process, to detect changes that have significance, with a false alarm rate that might be high, but nonetheless can reduce the analytic load requiring further analysis by orders of magnitude. As a cuing technique, it is desirable that the detection rate be very high. Significant changes are not likely to involve large areas of changes in brightness or color, but rather more likely involve confluence of large numbers of vehicles, or presence of a large vehicle where one is not expected, or initiation of a new construction site or clearing of a field to change the type of coverage, or grading for construction of a road or railway, or any of a number of other changes that could be of interest. Typically, the cuing of further analysis will trigger the collection of new imagery at higher resolution, or collection of other evidence, at the designated location. The further analysis might involve automated systems applied to that new evidence, or might require a human analyst to look at the totality of data at that site. Regardless of how the output is to be used, the goal is to dismiss massive amounts of imagery where no significant intelligence can be gleaned, so as to reduce collection and analysis workloads.

The significance of changes depends on the purpose one has in examining the imagery in the first place. Ideally, the system can be trained to discern significant changes by being trained against a carefully curated set of examples of changes that indicate the kinds of changes that the user seeks. The method for discerning changes might be dependent on the geolocation of the imagery, provided the method can be codified in relatively few parameters, or the method might be dependent on a characterization of the type of terrain represented by the imagery (field, urban, forest, etc.). Whatever method is proposed, it should incorporate a sliding scale that allows one to specify the degree of significance of the desired changes, thereby sacrificing detection rate for greater culling of less significant changes.
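As a toy illustration of such a sliding scale, the sketch below scores a registered before/after pair by the fraction of image blocks showing coherent change and culls pairs below a user-set significance threshold; the block statistic and the 3-sigma rule are illustrative assumptions, not a proposed method.

```python
# Hypothetical block-coherence change score with a sliding threshold.
import numpy as np

def change_score(before: np.ndarray, after: np.ndarray, block: int = 16) -> float:
    """Fraction of blocks whose mean absolute difference is anomalous.

    Coherent block-level change (many adjacent changed pixels) scores
    higher than scattered per-pixel noise from lighting or misregistration.
    """
    d = np.abs(after.astype(np.float32) - before.astype(np.float32))
    h = (d.shape[0] // block) * block
    w = (d.shape[1] // block) * block
    blocks = d[:h, :w].reshape(h // block, block, w // block, block)
    means = blocks.mean(axis=(1, 3))
    # A block "changed" if it deviates strongly from the scene-wide noise level.
    changed = means > means.mean() + 3.0 * means.std()
    return float(changed.mean())

def cull(pairs, significance: float):
    """Sliding scale: raise `significance` to cull more, at some recall cost."""
    return [p for p in pairs if change_score(*p) >= significance]
```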


PHASE I: Develop an approach and provide evidence of its viability to detect significant change through experiments using test pairs of before/after images that include both significant change and lack of significant change.

PHASE II: Develop an efficient implementation and evaluate performance characteristics according to well-defined metrics to detect significant change and to cull large amounts of imagery data.

PHASE III DUAL USE APPLICATIONS: As overhead and airborne imagery become more common, commercial as well as military applications will require automated systems to sample out regions of interest, dependent on the particular mission and needs of the user. An ability to train a system to extract imagery data of relevance to the user could become a service offered to organizations with geospatial intelligence requirements.

REFERENCES:
1. I. Niemeyer, M. Canty, “Pixel-based and Object-oriented change detection analysis using high resolution imagery,” ResearchGate, https://www.researchgate.net/publication/228409864_Pixel-based_and_object-oriented_change_detection_analysis_using_high-resolution_imagery (accessed 1/12/2017), Jan 2003.

2. S. Xiaolu, C. Bo, “Change Detection Using Change Vector Analysis from Landsat TM Images in Wuhan,” Procedia Environmental Sciences, Vol 11A, 2011, 238-244, Science Direct http://www.sciencedirect.com/science/article/pii/S1878029611008607, (Accessed 1/12/2017).

3. H. Shin, M. Orton, D. Collins, S. Doran, M. Leach, “Stacked Autoencoders for Unsupervised Feature Learning and Multiple Organ Detection in a Pilot Study Using 4D Patient Data,” IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI) Vol. 35(8), pp. 1930-1943, Aug 2013, http://ieeexplore.ieee.org/document/6399478

KEYWORDS: Change detection, autoencoding, imagery analysis, imagery culling, geospatial-intelligence

NGA181-007 TITLE: Video to Feature Data Association and Geolocation

TECHNOLOGY AREA(S): Information Systems

OBJECTIVE: Using video data (potentially 360-degree video) from sensors on ground vehicles with approximate geolocation information, develop algorithms that use foundational features (such as building footprints, corners of road intersections, and other labeled objects) to establish associations between segments in the observed imagery and object features, thereby greatly improving the time-varying geolocation accuracy of the sensor and expanding the number and quality of the associations.

DESCRIPTION: Navigation systems use GPS and other approaches to obtain approximate geolocation, which can be used to retrieve foundation data in maps, such as exists in Open Street Maps or Google Maps, among many other public domain sources. However, GPS and other radio sources can drop out, or be denied, resulting in inaccurate geolocation and potentially causing navigation errors. Yet finding associations between observed objects in the environment and features that have been pre-collected and stored in maps or geographic information systems can be used to refine estimates of geolocation, maintain tracks and “propriocentricity,” and assist in fine-precision geolocation. Depending on the richness of the foundation features, it might be possible to navigate precisely and with high speed, if one supplements the location services with obstacle avoidance.

This topic is concerned with finding associations between objects observed in imagery, collected as a time sequence, and foundation data that is practically and reasonably available. That foundation data may include building footprints, descriptions of the buildings or shops, 3-D descriptions of the buildings, labeled features such as road signs, fire hydrants, road layouts, and other information that would normally be used by a human to help navigate the environment, either based on consultation with a map, or from memory. It is assumed that the foundation data is accurate, and that points within the map are accurately geolocated (in advance).

An automated system that can efficiently find such associations would be able to greatly improve its geolocation accuracy, using calibrated camera parameters, based on associations of points in the map data. This would facilitate navigation at higher speeds in more complex environments.
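Assuming associations have already been established, the geolocation refinement step they enable can be illustrated with a standard perspective-n-point (PnP) solve; the sketch below uses OpenCV's solvePnP with hypothetical inputs (geolocated map points expressed in a local metric frame, matched pixel coordinates, and calibrated intrinsics).

```python
# Illustrative pose refinement from map-feature associations via PnP.
import numpy as np
import cv2

def refine_pose(map_pts_3d, image_pts_2d, K):
    """map_pts_3d: (N,3) geolocated map features in a local metric frame;
    image_pts_2d: (N,2) matched pixel coordinates; K: 3x3 camera intrinsics.
    Requires at least 4 correspondences."""
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(map_pts_3d, dtype=np.float64),
        np.asarray(image_pts_2d, dtype=np.float64),
        np.asarray(K, dtype=np.float64),
        None,                          # assume an undistorted, calibrated camera
        flags=cv2.SOLVEPNP_ITERATIVE,
    )
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)         # rotation vector -> rotation matrix
    camera_position = -R.T @ tvec      # sensor location in the map frame
    return camera_position.ravel()
```

In a full system this solve would run per frame inside a filter that also carries the vehicle's motion model, with outlier correspondences rejected (e.g., by RANSAC) before refinement.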

While lidar sensors or radio frequency sensors might further improve the ability of the system to find associations, this topic is concerned with finding associations using EO video cameras. Further, while cross-correlations with “street view” data might also assist in finding associations, the feature data is assumed to consist of typical map data, which can include points, vectors and shapes, and will not include “street view” images.

PHASE I: Identify and collect sample data sets for proof-of-concept and future testing purposes, and demonstrate the viability of approaches to automated extraction of points and segments for finding associations and to improving geolocation accuracy when associations can be found.

PHASE II: Develop algorithms and software to access feature foundation data, establish associations between observed data and features, improve geolocation accuracy, and extend and improve associations based on those improvements; then test and evaluate the prototype system.

PHASE III DUAL USE APPLICATIONS: Apart from military applications that assist Special Forces, peacekeeping forces, and others navigating unfamiliar territory, vendors of navigation systems for vehicles, smart phones, and mobile devices for vehicular and personal navigation will benefit from efficient, lightweight software capable of providing high geolocation accuracy even when GPS sources are inadequate.

REFERENCES:
1. H. Eugster and S. Nebiker. UAV-Based Augmented Monitoring--Real-Time Georeferencing and Integration of Video Imagery with Virtual Globe. International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. XXXVII, Part B1. Beijing 2008

2. Keith Yu Kit Leung, Christopher M. Clark, Jan P. Huissoon, “Localization in urban environments by matching ground level video images with an aerial image,” IEEE International Conference on Robotics and Automation, 2008

KEYWORDS: Computer Vision, autonomous vehicle navigation, dead-reckoning, spatial data


NGA181-008 TITLE: Blending Ground View and Overhead Models

TECHNOLOGY AREA(S): Human Systems, Information Systems

OBJECTIVE: Develop techniques that use information from deep models trained on ground-view images to learn representations of similar objects for overhead models. Demonstrate the ability to teach a neural network an object class for overhead imagery from ground-view examples.

DESCRIPTION: The current approach for automated classification and detection of objects in imagery is to create a large labeled dataset that can be used to train models. However, creating training datasets is expensive, time-consuming, and imprecise. It is not feasible to create a large labeled dataset for every type of object that needs to be detected by an automated system. This is particularly true in overhead geospatial domains, where the amount and quality of labeled data are low. There are, however, large, high-quality labeled datasets for ground view images, like ImageNet, MS COCO, etc.

NGA seeks advances in research that leverage the representation of objects in models trained on these ground view datasets to train classifiers and detectors in overhead imagery. An essential problem facing overhead geospatial detection systems is a lack of training data. The geospatial community should be able to leverage the innovation brought by the ground-based community to speed the development of overhead models. NGA would like an algorithm that is capable of leveraging images of an object taken from the ground to train a model to detect that object in overhead imagery. This algorithm should take less time to train, with less data, than a similar algorithm trained with a large overhead corpus. While the Government may furnish data for this project, proposals should be written around publicly available datasets accessible on the internet.
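A minimal sketch of one plausible starting point, not the required approach: reuse an ImageNet-pretrained (ground-view) backbone as a frozen feature extractor and train only a small classification head on the few available overhead examples. The model choice and hyperparameters are assumptions.

```python
# Hedged sketch: ground-view pretraining reused for an overhead classifier.
import torch
import torch.nn as nn
from torchvision import models

def overhead_classifier(num_classes: int) -> nn.Module:
    net = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    for p in net.parameters():        # freeze the ground-view representation
        p.requires_grad = False
    net.fc = nn.Linear(net.fc.in_features, num_classes)  # only the head trains
    return net

model = overhead_classifier(num_classes=5)
opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
# a training loop over the small overhead dataset would go here
```

The research this topic seeks goes beyond such fine-tuning, toward learning cross-view correspondences (see references 2, 4, and 5), but the sketch shows the baseline that any cross-view method should beat.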

PHASE I: Research techniques to accomplish cross-view transfer learning of novel object classes. Demonstrate initial instance of successful learning procedure.

PHASE II: Develop further refined techniques that require less data and achieve higher performance metrics. Create a software package that implements the basic research techniques discovered through this work.

PHASE III DUAL USE APPLICATIONS: These developments would be applicable to a variety of commercial applications that seek innovative and efficient ways to leverage overhead data for automated discovery of objects; examples of applications include financial markets interested in geospatial indicators, security providers with novel signatures in need of discovery, and agricultural modelers with needs to identify data trends.

REFERENCES:
1. DigitalGlobe, CosmiQ Works, NVIDIA, and Amazon Web Services team up to launch Spacenet open data initiative. http://investor.digitalglobe.com/phoenix.zhtml?c=70788&p=irol-newsArticle&ID=2197375.

2. Hani Altwaijry, Eduard Trulls, James Hays, Pascal Fua, and Serge Belongie. Learning to match aerial images with deep attentive architectures, 06 2016.

3. Tianqi Chen, Ian J. Goodfellow, and Jonathon Shlens. Net2net: Accelerating learning via knowledge transfer. CoRR, abs/1511.05641, 2015.


4. Tsung-Yi Lin, Yin Cui, Serge Belongie, and James Hays. Learning deep representations for ground-to-aerial geolocalization. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2015.

5. Nam N. Vo and James Hays. Localizing and orienting street views using overhead imagery. CoRR, abs/1608.00161, 2016.

KEYWORDS: computer vision; overhead imagery; transfer learning; machine learning; cross-view representation

NGA181-009 TITLE: Miniaturization of Neural Networks for Geospatial Data

TECHNOLOGY AREA(S): Human Systems, Information Systems

OBJECTIVE: Apply and develop techniques in miniaturizing neural networks to models specifically designed for geospatial data. Identify and quantify accuracy loss, speed improvements, and size savings for any models that are studied.

DESCRIPTION: As neural networks and deep learning continue to show improvements on standard benchmark datasets, these models will become more utilized by the geospatial community. The literature on neural networks includes an area of study that focuses on the miniaturization of these models. The goal of this process is to approximate a given high-performing, complex model with another model that is smaller and runs faster while not suffering a significant decrease in accuracy. Much of this work exists with the goal of embedding deep models on mobile systems, and much of the most advanced technology on mobile phones today is a result of these advances. NGA seeks the application and development of these miniaturization techniques to overhead imagery models. There have been recent advances in the creation of geospatial benchmarking datasets, with the creation of labeled overhead imagery corpora such as Cars Overhead with Context and Spacenet. Models that have performed well on these datasets are based on existing neural networks, modified and trained on these datasets. A natural parallel to these advances is to inquire about the ability to mobilize these high-performing models, in a way that considers the geospatial aspects of their application. If methods are applied and developed in Phase I to effectively shrink a neural network model while speeding up inference and not impacting accuracy on standard geospatial datasets, NGA would like to understand the effects of those efficiencies on a cloud computing infrastructure that supports automated tipping to imagery analysts. The research from this SBIR would be influential in determining NGA’s future architecture for serving analysts with the outputs of neural networks in a fast and scalable environment.
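As one concrete instance of the miniaturization techniques in this literature (knowledge distillation, reference 2 below), a minimal PyTorch sketch of the distillation loss follows; the temperature and loss weighting are the usual assumed hyperparameters, not values prescribed by this topic.

```python
# Sketch of the distillation loss from Hinton et al. (reference 2):
# train a small "student" to match a large "teacher's" softened outputs
# in addition to the hard labels.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      T: float = 4.0, alpha: float = 0.7):
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)                      # rescale gradients as in Hinton et al.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```

Pruning and quantization (references 1, 3, and 4) are complementary routes to the same goal and could be combined with distillation in the trade-off study Phase I calls for.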

PHASE I: Research and identify various methods to take existing neural networks and miniaturize them for standard overhead imagery datasets. Report on the trade-offs of the techniques researched and developed, comparing the original network with the miniaturized one.


PHASE II: Develop a method to scale these techniques to a broader class of neural models for geospatial data. Demonstrate the benefits of miniaturized geospatial networks in distributed systems.

PHASE III DUAL USE APPLICATIONS: These methods would be useful to a broad range of commercial and civil applications that desire efficient, high-performing inference models that are able to be run in a computing constrained environment; examples include overhead imagery analytics platforms, law enforcement, and earth systems monitoring.

REFERENCES:
1. Song Han, Huizi Mao, and William J. Dally. Deep compression: Compressing deep neural network with pruning, trained quantization and Huffman coding. CoRR, abs/1510.00149, 2015.

2. G. Hinton, O. Vinyals, and J. Dean. Distilling the Knowledge in a Neural Network. ArXiv e-prints, March 2015.

3. Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications. CoRR, abs/1704.04861, 2017.

4. Forrest N. Iandola, Matthew W. Moskewicz, Khalid Ashraf, Song Han, William J. Dally, and Kurt Keutzer. Squeezenet: Alexnet-level accuracy with 50x fewer parameters and 1mb model size. CoRR, abs/1602.07360, 2016.

KEYWORDS: computer vision; overhead imagery; mobile networks; machine learning; optimization

NGA181-010 TITLE: Low-Shot Detection in Remote Sensing Imagery

TECHNOLOGY AREA(S): Battlespace, Electronics, Information Systems, Sensors

OBJECTIVE: Develop novel techniques to identify and locate uncommon targets in overhead and aerial imagery, specifically when few prior examples are available. Initial focus will be on panchromatic electro-optical (EO) imagery with a subsequent extension to multi-spectral imagery (MSI) and/or synthetic aperture radar (SAR) imagery.

DESCRIPTION: The National Geospatial-Intelligence Agency (NGA) produces timely, accurate and actionable geospatial intelligence (GEOINT) to support U.S. national security. To continue to be effective in today's environment of simultaneous rapid growth in available imagery data and in the number and complexity of intelligence targets, NGA must continue to improve its automated and semi-automated methods. Of particular concern is furthering NGA's ability to identify and locate uncommon intelligence targets in large search regions.

Recent advances in computer vision due to deep learning have dramatically improved the state-of-the-art in techniques such as object detection and semantic segmentation, to include scenarios where little data is available for training (i.e., low-shot). While these approaches have shown encouraging results on NGA problems, little research has been performed that is specific to the remote sensing domain: a deficiency which this effort aims to address.


It is the intention of the government to provide labeled data (pending availability and releasability) to assist with algorithm development and capability demonstration; however, the performer is encouraged to identify and use imagery from other sources as well. Baseline performance will be established by comparison against a state-of-the-art detection network (e.g., Feature Pyramid Networks, YOLO9000) pre-trained on ImageNet or COCO. Performance evaluation will be conducted using metrics derived from precision-recall curves.
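A minimal sketch of the stated evaluation protocol, metrics derived from precision-recall curves: here, average precision computed over score-ranked detections. Matching detections to ground truth is assumed to be done upstream; the function name and example values are illustrative.

```python
# Illustrative average-precision computation from ranked detections.
import numpy as np

def average_precision(scores, is_true_positive, num_ground_truth):
    """scores: (N,) detector confidences; is_true_positive: (N,) booleans
    indicating whether each detection matched a ground-truth object."""
    order = np.argsort(scores)[::-1]           # rank detections by confidence
    tp = np.cumsum(np.asarray(is_true_positive)[order])
    fp = np.cumsum(~np.asarray(is_true_positive)[order])
    recall = tp / max(num_ground_truth, 1)
    precision = tp / np.maximum(tp + fp, 1)
    # area under the PR curve via the rectangle rule over recall increments
    return float(np.sum(np.diff(np.concatenate(([0.0], recall))) * precision))

print(average_precision([0.9, 0.8, 0.6], [True, False, True],
                        num_ground_truth=2))   # ~0.83
```

In the low-shot setting, AP as a function of the number of training examples per class is the natural curve to report against the pre-trained baseline.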

PHASE I: Address deep learning low-shot detection in panchromatic EO with consideration for extension to MSI and/or SAR in Phase II. Phase I will result in a proof of concept algorithm suite for low-shot detection and thorough documentation of conducted experiments and results in a final report.

PHASE II: Extend Phase I capabilities to MSI and/or SAR to include researching methodologies to simultaneously exploit multiple modalities and/or develop zero-shot capabilities (i.e., no prior available examples). Develop enhancements to address identified deficiencies from Phase I or those identified when processing MSI and/or SAR data. Deliver updates to the proof of concept algorithm suite and technical reports. Phase II will result in a prototype end-to-end implementation of the Phase I low-shot detection system extended to process MSI and/or SAR imagery and a comprehensive final report.

PHASE III DUAL USE APPLICATIONS: Technology enabling the automated search for uncommon objects in overhead imagery would be widely applicable across the government and commercial sectors. Military applications include national security, targeting, and intelligence. Commercially, it will apply to urban planning, geology, anthropology, economics, search and rescue, and all other domains that benefit from identifying objects in overhead imagery.

REFERENCES:
1. Hariharan B. and Girshick R. Low-shot Visual Object Recognition by Shrinking and Hallucinating Features. arXiv preprint arXiv:1606.02819v2. 2016 Nov 30.

2. Wang Y. and Hebert M. Combining low-density separators with CNNs. In Advances in Neural Information Processing Systems 2016 (pp.244-252).

3. Lin T. et al., Feature Pyramid Networks for Object Detection. arXiv preprint arXiv:1612.03144, 2016 Dec 9.

4. Xie M., Jean N., Burke M., Lobell D., Ermon S. Transfer Learning from deep features for remote sensing and poverty mapping. arXiv preprint arXiv:1510.00098. 2015 Oct 1.

5. Shrivastava A et al. Learning from Simulated and Unsupervised Images through Adversarial Training. arXiv preprint arXiv: 1612.07828. 2016 Dec 22.

6. Bousmalis K, Silberman N, Dohan D, Erhan D, Krishnan D. Unsupervised Pixel-Level Domain Adaptation with Generative Adversarial Networks. arXiv preprint arXiv:1612.05424. 2016 Dec 16.

7. Redmon J, Farhadi A. YOLO9000: Better, Faster, Stronger. arXiv preprint arXiv:1612.08242. 2016 Dec 25.

8. Lin TY, et al. Microsoft COCO: Common Objects in Context. In European Conference on Computer Vision (ECCV) 2014, Sept 6 (pp. 740-755). Springer International Publishing.


9. Deng J, et al. Imagenet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on 2009 Jun 20 (pp. 248-255). IEEE.

KEYWORDS: computer vision, machine learning, deep learning, detection, segmentation, low shot learning, image processing
