https://www.darpa.mil/DDM_Gallery/visar_619x316.jpg / (Image Credit: DARPA)

Posted: June 22, 2018 | Completed: June 19, 2018 | DTIC Accession No.: DSIAC-2184019 | By: Scott E. Armistead, Bradley DeRoos, Powers Garmon, Ph.D.
What new sensing technologies, methodologies, and algorithms exist to detect and identify ground targets from the air?

 

The Defense Systems Information Analysis Center (DSIAC) received a technical inquiry requesting information on new sensing technologies, methodologies, and algorithms for air-to-ground target tracking. DSIAC staff and subject matter experts (SMEs) reviewed publicly available documentation and knowledge repositories (e.g., Institute of Electrical and Electronics Engineers [IEEE] journals and conference proceedings, Georgia Tech Research Institute [GTRI] research reports, Tri-Service Radar Symposium proceedings, etc.) for information on various sensing technologies, methodologies, and algorithms to prepare its response. DSIAC collated inputs from 4D Tech Solutions, GTRI, and DSIAC SMEs on visual, thermal, acoustic, light detection and ranging, radar, biometric, and facilitating technologies for detecting and identifying personnel and small vehicle targets from airborne platforms. DSIAC staff analyzed the information gathered and provided the inquirer with an overview of technologies, systems, and/or individuals relevant to the inquiry.

 


1.0  Introduction

Defense Systems Information Analysis Center (DSIAC) staff searched various repositories for documents relevant to the inquiry and collated inputs from 4D Tech Solutions, Inc., Georgia Tech Research Institute (GTRI), and DSIAC subject matter experts (SMEs) on visual, thermal, acoustic, light detection and ranging (LiDAR), radar, biometric, and facilitating technologies for detection and identification (ID) of personnel and small vehicle targets from airborne platforms. DSIAC staff reviewed and analyzed the search results and SME inputs to provide a summary of technologies, systems, or individuals related to the inquiry.  DSIAC found that a variety of new applications are being developed to track objects (e.g., humans) as they move on the ground; however, there was no indication that the technology has advanced sufficiently to biometrically track individual human targets, whether dismounted or mounted in light tactical or utility vehicles.

 


2.0  Personnel/Biometric Tracking from Small UAVs

Mr. Brad DeRoos, President of 4D Tech Solutions, Inc., provided an assessment of current technical capabilities to perform airborne tracking and ID of dismounted/mounted personnel and small tactical vehicles.  His company specializes in providing military and commercial autonomous and biometrics technology solutions for developing the following:

  • Unmanned aerial vehicles (UAVs), unmanned ground vehicles, and unmanned surface vessels.
  • Modular UAV architecture for payload testing.
  • Advanced UAV flight planning software and photogrammetric imaging techniques for three-dimensional (3-D) mapping.
  • Infrared and two-wavelength digital holography biometric equipment.

Tracking individuals on the ground from UAVs has become a key offering of many small UAV manufacturers, mainly driven by consumer interest in aerial self-photography (e.g., selfies or action videos).  This type of cooperative/semi-cooperative tracking is much different from performing personnel tracking from a UAV when the individual being tracked is not compliant.  These simple personnel tracking systems increasingly use items, such as a locator or global positioning system (GPS)-derived information from a cellphone or stand-alone GPS communicator, to provide effective tracking.  DroneGuru provides a list of “follow me” drones [1].  These drones require the tracked subject to carry a device that interfaces and cooperates with the drone (i.e., UAV) doing the tracking.
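The GPS-based “follow me” mechanism described above reduces to continuously steering toward the subject’s reported position.  A minimal sketch of that geometry is shown below; the function name and the mean-Earth-radius constant are illustrative and not drawn from any vendor’s implementation.

```python
import math

def follow_offset(drone_lat, drone_lon, subject_lat, subject_lon):
    """Great-circle range (m) and initial bearing (deg) from drone to subject.

    Uses the haversine formula; 6371000 m is the mean Earth radius.
    """
    R = 6371000.0
    p1, p2 = math.radians(drone_lat), math.radians(subject_lat)
    dlat = p2 - p1
    dlon = math.radians(subject_lon - drone_lon)
    a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    dist = 2 * R * math.asin(math.sqrt(a))
    # Initial bearing, measured clockwise from true north.
    y = math.sin(dlon) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    bearing = (math.degrees(math.atan2(y, x)) + 360.0) % 360.0
    return dist, bearing
```

A flight controller would feed the returned range and bearing into its velocity/heading loop each time a new GPS fix arrives from the subject’s device.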

The ability to perform biometric tracking is a subset of personnel tracking.  Based on a review of the article “Army Tracking Plan:  Drones that Never Forget a Face” [2], it is apparent that visually and thermally tracking individuals is still a challenge.  Biometric-based tracking presents several challenges, including the effect of capture angle on target ID:  the pose angle is important for two-dimensional biometric facial image matching (visual and thermal/multispectral), and the capture angle is important for gait analysis.  Non-biometric tracking must be reliably solved as a precursor to biometric tracking.  The concept of drones tracking an individual based on biometrics was presented as early as 2011, but no progress appears to have been made by the company that suggested the idea (i.e., Progeny Systems Corporation).

 

2.1  Visual Tracking

Research is being performed to extend the capabilities of commercial off-the-shelf technologies for individual/pedestrian tracking from UAVs. Recent technical advances in UAVs have made a broad range of applications possible. In the 2015 publication, “On-Board Real-Time Tracking of Pedestrians on a UAV” [3], the researchers focused on “the application of following a walking pedestrian in real time, using optimized pedestrian detection and object tracking.” For this, they used an on-board embedded system, offering an optimal ratio of computational power to weight. They extended the commonly used ground plane estimation technique, which can be used to reduce the search space, based on the sensor data from the UAV. Integrating the ground plane constraint yields a significant speed-up over the already optimized aggregate channel feature detector. To compensate for frames without detections, they used a particle tracker based on color information. The tracking algorithms were successfully validated from a flying UAV.
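The paper’s color-based particle tracker is not reproduced here; the simplified sketch below (synthetic single-channel frame, random-walk motion model, histogram-similarity weighting, all assumptions) only illustrates the predict/weight/resample cycle such a tracker uses to bridge frames without detections.

```python
import numpy as np

rng = np.random.default_rng(0)

def color_hist(img, cx, cy, r=5, bins=8):
    """Normalized intensity histogram of a square patch centered at (cx, cy)."""
    patch = img[max(cy - r, 0):cy + r, max(cx - r, 0):cx + r]
    h, _ = np.histogram(patch, bins=bins, range=(0, 256))
    return h / max(h.sum(), 1)

def particle_step(img, particles, ref_hist, noise=3.0):
    """One predict/weight/resample cycle of a color-based particle tracker."""
    particles = particles + rng.normal(0, noise, particles.shape)  # random-walk predict
    particles = np.clip(particles, 6, img.shape[0] - 7)
    weights = np.empty(len(particles))
    for i, (x, y) in enumerate(particles.astype(int)):
        h = color_hist(img, x, y)
        weights[i] = np.exp(-np.sum((h - ref_hist) ** 2) / 0.05)   # histogram similarity
    weights /= weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)  # resample
    return particles[idx], weights

# Synthetic frame: a bright 12x12 "pedestrian" centered near (40, 40).
img = np.zeros((80, 80))
img[34:46, 34:46] = 200
ref = color_hist(img, 40, 40)
parts = rng.uniform(30, 50, size=(200, 2))
for _ in range(10):
    parts, _ = particle_step(img, parts, ref)
est = parts.mean(axis=0)   # particle-cloud estimate of the target position
```

After a few iterations the particle cloud concentrates on the target, so the weighted estimate can carry the track until the detector fires again.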

As identified in the documents “UAV-Based Monitoring of Pedestrian Groups” [4] and “Visual Object Tracking for Unmanned Aerial Vehicles:  A Benchmark and New Motion Models” [5], tracking pedestrian groups is a potentially plausible means by which UAV tracking of individuals on the ground can occur.

One commercialized technology of interest is that which is embedded within the Skydio R1 drone [6].  The R1 control system is based on a fully integrated, end-to-end, autonomous software stack (i.e., Skydio Autonomy Engine [7]) that is powered by an NVIDIA Jetson TX1, a 256-core compute unified device architecture (CUDA) artificial intelligence (AI) supercomputing device [8].  Skydio R1 provides complex real-time video imagery analytics and integration with time-space-position information for fully autonomous launch, flight planning, maneuvering, target tracking, and landing.  The R1’s app software can also be used to launch and land, control the system, and preset certain filming and flying conditions [6–8].

The Skydio R1’s control, vision, navigation, and data processing/analytics systems integrate advanced algorithm components spanning perception, planning, and control, which gives the R1 a unique intelligence that is analogous to how a person would navigate an environment.  On the perception side, the system uses computer vision from 13 cameras that capture omnidirectional video to determine the location of objects. Using a deep neural network, it compiles information on each object and identifies individuals or vehicles by size, shape, color, clothing, etc.  For each target, it builds up a unique visual ID to tell people and vehicles apart so that it stays focused on the right one [6–8].
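Skydio has not published its algorithms; one common way to maintain the kind of persistent visual IDs described above is to match deep-network appearance embeddings by cosine similarity.  The class below is an assumption-laden sketch of that bookkeeping, not the R1’s actual implementation (the threshold and update rate are illustrative).

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

class VisualIDBank:
    """Assigns detections to persistent IDs by embedding similarity."""

    def __init__(self, threshold=0.8):
        self.threshold = threshold
        self.tracks = {}      # id -> running-average reference embedding
        self.next_id = 0

    def assign(self, embedding):
        best_id, best_sim = None, self.threshold
        for tid, ref in self.tracks.items():
            sim = cosine(embedding, ref)
            if sim > best_sim:
                best_id, best_sim = tid, sim
        if best_id is None:                        # unseen appearance: new ID
            best_id = self.next_id
            self.next_id += 1
            self.tracks[best_id] = embedding.copy()
        else:                                      # update the reference embedding
            self.tracks[best_id] = 0.9 * self.tracks[best_id] + 0.1 * embedding
        return best_id
```

Each new detection either reinforces an existing identity or opens a new one, which is how a tracker can stay locked onto one person among several similar-looking ones.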

That data feeds into a motion-planning system, which pinpoints a target’s location and predicts upcoming movements. It also considers its own maneuvering limits to continually trade off and balance parameters to optimize tracking and filming. This predictive mode allows the drone to “lead” targets as they maneuver through difficult terrain.
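This “leading” behavior can be illustrated with the simplest possible motion model, constant velocity estimated from the last two observed positions; a production system would use a richer estimator, so the sketch below is only a conceptual stand-in.

```python
import numpy as np

def predict_lead(track, dt_ahead, dt_hist=1.0):
    """Lead point for a maneuvering target under a constant-velocity assumption.

    track: list of (x, y) positions sampled every dt_hist seconds.
    dt_ahead: how far into the future (seconds) to extrapolate.
    """
    p_prev, p_curr = np.asarray(track[-2], float), np.asarray(track[-1], float)
    velocity = (p_curr - p_prev) / dt_hist   # finite-difference velocity estimate
    return p_curr + velocity * dt_ahead      # extrapolate ahead of the target

# A target moving from (0, 0) to (2, 1) in one second, led two seconds ahead.
lead = predict_lead([(0.0, 0.0), (2.0, 1.0)], dt_ahead=2.0)
```

The planner then flies toward the lead point rather than the target’s current position, trading off the prediction against its own maneuvering limits.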

The system’s use of deep neural networks, deep learning, and machine learning allows it to build up unique visual identifiers and perform discrimination between different people and object IDs (see Figures 1–3).  This allows the system to stay locked onto a single target and even switch between individuals as necessary (e.g., follow the trail of a package passed from individual to individual).

DSIAC contacted SOFWERX and verified that they have a Skydio R1 in house for testing purposes.

 

Figure 1:  (Left) 13 Cameras Enabling Full 360° Vision, Continuous Target Tracking, and Obstacle Avoidance [9, 10] and (Right) NVIDIA Jetson TX1 CUDA-Core AI Supercomputer With Fully Integrated Autonomous Software Stack [10].

 

Figure 2:  Simultaneous Localization and Mapping to Construct and Update an Environment Map While Keeping Track of a Target’s Location Within [9].

 

Figure 3:  (Left) Deep Learning to Develop Unique Visual Identifiers, Improve Behavior, and Become More Capable Over Time and (Right) 3-D Environment Mapping, Maneuvering Limitations, and Target Movement Prediction to Flight Plan [11].

2.2  Thermal Tracking

Detecting and tracking people in visible-light images has been the subject of extensive research in past decades, with applications ranging from surveillance to search and rescue. With the growing availability of thermal cameras and the distinctive thermal signature of humans, research efforts have focused on developing people detection and tracking methodologies applicable to this sensing modality.  Many challenges arise in transitioning from visible-light to thermal images, especially with the recent trend of employing thermal cameras on aerial platforms. The following papers present the challenges of tracking individuals from UAVs using thermal cameras:

  • “People Detection and Tracking From Aerial Thermal Views” [12].
  • “Real-Time People and Vehicle Detection From UAV Imagery” [13].
  • “Pedestrian Detection and Tracking From Low-Resolution Unmanned Aerial Vehicle Thermal Imagery” [14].
  • “Persistent Visual Tracking and Accurate Geo-Location of Moving Ground Targets by Small Air Vehicles” [15].
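A common first stage in aerial thermal pipelines like those cited above is isolating warm blobs before any classification.  The following toy sketch (the temperature threshold and minimum blob size are hypothetical) thresholds a radiometric frame near human skin temperature and flood-fills connected hot pixels into candidate detections.

```python
import numpy as np
from collections import deque

def hot_spots(frame, threshold=310.0, min_pixels=4):
    """Candidate human detections in a radiometric thermal frame (kelvin):
    threshold near skin temperature, then group hot pixels into blobs."""
    hot = frame > threshold
    seen = np.zeros_like(hot, dtype=bool)
    blobs = []
    for y, x in zip(*np.nonzero(hot)):
        if seen[y, x]:
            continue
        queue, pixels = deque([(y, x)]), []
        seen[y, x] = True
        while queue:                                 # flood-fill one blob
            cy, cx = queue.popleft()
            pixels.append((cy, cx))
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = cy + dy, cx + dx
                if 0 <= ny < hot.shape[0] and 0 <= nx < hot.shape[1] \
                        and hot[ny, nx] and not seen[ny, nx]:
                    seen[ny, nx] = True
                    queue.append((ny, nx))
        if len(pixels) >= min_pixels:
            blobs.append(tuple(np.mean(pixels, axis=0)))  # blob centroid (y, x)
    return blobs
```

Low resolution and small target size, as the cited papers note, make this gating step far harder from altitude than the sketch suggests.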

 


3.0  Tracking Light Tactical and Utility Vehicles from Small UAVs

 

3.1  Enhancing UAV Target Detection and Tracking with Real-Time Self-Learning

UAVs have been widely used for commercial and surveillance purposes in recent years. Vehicle tracking from aerial video is one common application. In the paper “Robust Vehicle Tracking and Detection From UAVs” [16], a self-learning mechanism is proposed for vehicle tracking in real time. The main contribution of this paper is that the proposed system can automatically detect and track multiple vehicles with a self-learning process, enhancing tracking and detection accuracy. Two methods were used to detect vehicles:  (1) the features from the accelerated segment test with the histograms of oriented gradients method and (2) the hue, saturation, and value color feature with the grey-level co-occurrence matrix method. A forward and backward tracking mechanism was employed for vehicle tracking. The main purpose of the research effort was to increase vehicle detection accuracy by using the tracking results and a learning process that monitors detection and tracking performance from their outputs. Videos captured from UAVs were used to evaluate the performance of the proposed method. According to the results, the proposed learning system can increase detection performance.
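A forward and backward tracking mechanism lends itself to a simple reliability gate:  track a vehicle forward through a window of frames, track it backward again, and compare the backward track’s endpoint with the original starting point.  The sketch below shows that consistency check in isolation (the tolerance value is illustrative, and the tracks here stand in for real tracker output).

```python
import numpy as np

def forward_backward_error(forward_track, backward_track):
    """Forward-backward error: distance between where forward tracking started
    and where the reversed track ends up. A large error flags an unreliable
    track that should not be fed back into a self-learning detector."""
    start = np.asarray(forward_track[0], float)
    back_end = np.asarray(backward_track[-1], float)
    return float(np.linalg.norm(start - back_end))

def reliable(forward_track, backward_track, tol=2.0):
    """Gate on forward-backward consistency (tol in pixels, illustrative)."""
    return forward_backward_error(forward_track, backward_track) <= tol
```

Only tracks that pass the gate would be used to update the detector, which is the essence of a self-learning loop that improves detection with its own tracking output.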

 

3.2  Enhancing UAV Navigation With Target Motion Estimation

Small UAVs are increasingly popular in many applications for their low cost, ease of use, and rapid deployment. One highly desirable capability is pursuing a target along a roadway while providing a persistent aerial view. This is made difficult by large UAV state uncertainty and limited maneuverability. However, technical papers indicate that by carefully representing state uncertainty, planning in the space of available control actions, and using environmental knowledge to constrain possible target motion, adequate performance can be achieved with existing fielded vehicles. The first reference paper, “Persistent Visual Tracking and Accurate Geo-Location of Moving Ground Targets by Small Air Vehicles” [15], presents a comparative study of several target estimation and motion planning techniques. Results from field experiments applying these ideas to a commercial human-portable, fixed-wing UAV system are also presented in the document “Intelligent Motion Video Guidance for Unmanned Air System Ground Target Surveillance” [17].

 


4.0  Tracking UAVs from Small UAVs

Drones usually cannot be tracked by conventional radar systems due to their small size. To address the problem in the civilian sector, Vodafone created a fourth-generation network subscriber identity module card that makes drones visible on air traffic control systems and allows operators greater control if the drones go off course.  This radio positioning system lets the operator and authorized bodies, such as air traffic control, track a drone in real time with up to 50-m accuracy. It can force a drone to land automatically or return to the operator if it approaches excluded zones like airports and prisons.  It also has an emergency override function.
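Vodafone has not published implementation details; the toy sketch below only illustrates the exclusion-zone check such a system implies, using a hypothetical local-plane coordinate frame (meters) and a hypothetical zone format.

```python
def geofence_action(pos_xy, zones):
    """Check a drone's local-plane position (meters) against circular
    exclusion zones and return the action a tracking system might command.

    zones: iterable of (center_x, center_y, radius_m, name) tuples
    (a hypothetical format chosen for this sketch).
    """
    for cx, cy, r, name in zones:
        if (pos_xy[0] - cx) ** 2 + (pos_xy[1] - cy) ** 2 <= r ** 2:
            return f"force-land: inside {name}"
    return "continue"

# Illustrative zones: a 500-m radius around an airport, 150 m around a prison.
zones = [(0.0, 0.0, 500.0, "airport"), (2000.0, 0.0, 150.0, "prison")]
```

In a deployed system, the position would come from the network-tracked SIM fix rather than the drone’s own telemetry, so the check cannot be defeated by the aircraft.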

Also related to the civilian sector tracking of drones, Chinese company DJI recently introduced its own Wi-Fi-based drone ID and tracking system for use by the Federal Aviation Administration and law enforcement.  DJI’s “Aeroscope” will operate on the 2.4- and 5.8-GHz Wi-Fi bands and broadcast each drone’s position, altitude, direction, speed, make, model, serial number, and any additional ID information that pilots provide [18].

Tracking small UAVs from the ground is problematic and even more so if detection and tracking occurs from the air due to sensor payload weight and size.  This research identified three technologies that can potentially track UAVs from UAVs.

 

4.1  Acoustic Tracking

Acoustic detection and tracking of UAVs is attractive, as the technology can be inexpensive and lightweight.  The ability to track a UAV using acoustic tracking technology deployed from another UAV would need to be assessed [19–21].
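To illustrate the principle underlying the cited work, the bearing to an acoustic source can be estimated from the time difference of arrival (TDOA) between two microphones via cross-correlation.  This two-microphone, far-field sketch uses synthetic broadband noise as a stand-in for rotor sound; the sample rate, spacing, and delay are illustrative.

```python
import numpy as np

def tdoa_bearing(sig_a, sig_b, fs, mic_spacing, c=343.0):
    """Bearing (deg) of a far-field acoustic source from the arrival-time
    difference between two microphones, estimated by cross-correlation."""
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = int(np.argmax(corr)) - (len(sig_b) - 1)  # delay of a relative to b, samples
    tau = lag / fs                                 # delay in seconds
    s = np.clip(tau * c / mic_spacing, -1.0, 1.0)  # sine of the arrival angle
    return float(np.degrees(np.arcsin(s)))

rng = np.random.default_rng(1)
noise = rng.normal(size=960)                        # synthetic broadband rotor noise
delay = 10                                          # 10-sample arrival difference
a = np.concatenate([np.zeros(delay), noise])[:noise.size]
bearing = tdoa_bearing(a, noise, fs=48000, mic_spacing=0.2)
```

A tetrahedral array, as in the ARL work [19], extends the same idea to full azimuth and elevation; doing this from another UAV adds the problem of rejecting the host vehicle’s own rotor noise.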

 

4.2  Light Detection and Ranging (LiDAR) Tracking

Figure 4 shows images taken from a small LiDAR system developed by the U.S. Army Research Laboratory (ARL) in Adelphi, MD.  Small LiDAR sensors can offer very high-resolution images out to a range of over 500 ft.  The sensor has a laser pulse repetition frequency of up to 400,000 pulses per second and a controllable scan rate, which would allow detecting and tracking other small UAVs [22].
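ARL’s processing chain is not described in the reference; as a generic illustration, airborne objects can be separated from ground returns in a LiDAR point cloud by an altitude gate followed by coarse clustering.  The grid-cell grouping below is a toy stand-in for proper Euclidean clustering, with hypothetical gate and cell parameters.

```python
import numpy as np

def airborne_clusters(points, min_alt=5.0, cell=1.0, min_pts=3):
    """Coarse detection of airborne objects in an (N, 3) point cloud (x, y, z m):
    drop near-ground returns, bin the rest into grid cells, and report the
    centroid of any cell holding enough points."""
    aloft = points[points[:, 2] > min_alt]                 # altitude gate
    cells = np.floor(aloft[:, :2] / cell).astype(int)      # coarse x-y binning
    uniq, counts = np.unique(cells, axis=0, return_counts=True)
    centroids = []
    for cell_xy in uniq[counts >= min_pts]:
        mask = np.all(cells == cell_xy, axis=1)
        centroids.append(aloft[mask].mean(axis=0))         # cluster centroid
    return np.array(centroids)

rng = np.random.default_rng(2)
ground = np.column_stack([rng.uniform(0, 50, 200),
                          rng.uniform(0, 50, 200),
                          rng.normal(0.0, 0.1, 200)])      # ground-plane returns
drone = np.array([[10.1, 3.2, 20.0], [10.3, 3.4, 19.9], [10.4, 3.3, 20.1],
                  [10.2, 3.6, 20.0], [10.25, 3.45, 20.05]])  # small hovering target
hits = airborne_clusters(np.vstack([ground, drone]))
```

Centroids reported frame to frame would then feed a conventional tracker, exploiting the high pulse rate and controllable scan the ARL sensor provides.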

 

Figure 4:  Images from ARL LiDAR System [22].

 

4.3  Radar Tracking

Phased-array radars use a grid of antennas that can steer a radar beam in a desired direction by emitting radio waves in precisely defined patterns. By doing this multiple times per second, users can scan the beam over a whole field of view (FOV) without ever moving the device itself. However, the radar devices have proven to be difficult to miniaturize beyond a certain point because of the amount of electronics involved (e.g., the antennas must be a certain size to work with a given wavelength, their controllers take up space, there is processing for received data, etc.).
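The beam steering described above comes down to applying a progressive phase shift across the elements.  For a uniform linear array, the required per-element phases follow directly from the element spacing, wavelength, and steering angle; the array geometry and frequency used below are illustrative.

```python
import numpy as np

def steering_phases(n_elements, spacing_m, freq_hz, steer_deg, c=3.0e8):
    """Per-element phase shifts (radians) for a uniform linear array to
    steer its main beam steer_deg off boresight."""
    lam = c / freq_hz                         # wavelength
    n = np.arange(n_elements)                 # element indices along the array
    return -2 * np.pi * spacing_m * n * np.sin(np.radians(steer_deg)) / lam

# 8 elements at half-wavelength spacing for a 12-GHz radar, steered 30 degrees.
phases = steering_phases(8, spacing_m=0.0125, freq_hz=12e9, steer_deg=30)
```

With half-wavelength spacing, steering to 30° off boresight requires a phase step of a quarter cycle between adjacent elements; recomputing these phases many times per second is what sweeps the beam across the FOV with no moving parts.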

A phased-array technology that may prove useful for tracking UAVs from UAVs is being developed by a company named Echodyne.  It uses metamaterials to create a phased array on a significantly smaller scale. Instead of dozens of individual antennas, the device uses a surface with a carefully engineered 3-D pattern that allows beams to propagate in a similar manner but with more precision and lower power.  It sweeps across a maximum of 120° horizontal and 80° vertical (azimuth and elevation), an FOV roughly equivalent to that of a human. These systems are meant to replace the vigilance of a pilot, onboard or remote. An effective replacement for human visual acuity and search capability is a prerequisite for autonomous flight outside the operator’s line of sight [23].

The U.S. Army has a goal of mounting fire-control radar and cameras on UAVs to detect and track hostile drones.  The purpose is to create small, man-portable UAVs that troops can use on the spot to detect and track hostile drones rather than having to call in aircraft or other support to identify the target. The data could then be relayed to friendly weapons that can destroy the airborne intruder.  According to an Army research proposal, “A radar system mounted on a UAV will provide high fidelity radar information to Blue forces on forward observation missions…. The actionable intelligence gathered from this detailed radar track information will allow for timely decisions on how to react to any potential airborne threats. Operators will be able to request visual confirmation from the UAV’s on-board camera system prior to engaging the threat.”  However, the proposal also notes that size, weight, and power considerations will be key to installing fire control radar on a small UAV.  A search was performed of FedBizOpps, but neither a request for information nor a sources sought notice could be found [24].

 

4.4  Emerging Technologies

Other emerging technologies are maturing that may improve the effectiveness and reduce the cost of implementing the previously mentioned methodologies.  DSIAC provided a few samples of technologies, including the following:

  • ARL face recognition technology [25].
  • Air Force Institute of Technology (AFIT) enhanced imaging system [26].
  • University Centre in Svalbard (UNIS), Norway, small low-cost hyperspectral imagers [27].

 

The first two are developmental technologies that could directly improve the detection and identification of mounted/dismounted personnel. The third shows a developmental technology that could allow implementation on small UAVs if sensitivity were improved.

It is beyond the scope of a DSIAC technical inquiry to perform a comprehensive assessment of the domain in question or associated facilitating technologies and how they might be applied in an airborne operational environment.  However, this could be accomplished under a DSIAC extended technical inquiry or core analysis task, if desired.

 

4.4.1  ARL Face Recognition Technology

U.S. Army researchers developed an AI and machine-learning technique that produces a visible image from a thermal image of a person’s face captured in low-light or nighttime conditions.  This development could lead to enhanced real-time biometrics and post-mission forensic analysis for covert nighttime operations.  The technology enables matching between thermal face images and existing biometric face databases/watch lists that only contain visible face imagery.   It provides a way for humans to visually compare visible and thermal facial imagery through thermal-to-visible face synthesis (see Figure 5). The goal of the program is to enhance both automatic and human-matching capabilities [25].

 

Figure 5:  ARL Conceptual Illustration for Thermal-to-Visible Synthesis for Interoperability with Existing Visible-Based, Facial Recognition Systems [25].

 

4.4.2  AFIT Enhanced Imaging System to Differentiate Human Skin from Other Materials

The AFIT Sensors Exploitation Research Group developed a process for differentiating human skin from other materials within an image to reduce false detection rates in support of search, rescue, and recovery; security; and surveillance operations (see Figure 6).  Instead of using bulky, expensive, and relatively slow hyperspectral camera systems, Dr. Michael J. Mendenhall’s research team developed a prototype camera system designed specifically for a skin detection and color estimation approach. The system requires only a small number of spectral channels. The multispectral camera system enhances skin detection by focusing on the amount of melanin in the skin, allowing detections to be filtered for different concentrations of melanin (fair or dark skin or anything in between) to improve the capability of locating a person of interest [26].
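AFIT’s specific channels and thresholds are not given in the reference; the sketch below only illustrates the general idea of separating skin from confusers with a normalized difference of two spectral bands.  The band pairing and thresholds are hypothetical and would be calibrated per sensor.

```python
import numpy as np

def skin_index(band_a, band_b):
    """Normalized difference of two spectral channels. The premise is that
    skin reflectance differs strongly between the chosen bands while many
    confuser materials do not (band choice is an assumption here)."""
    a, b = band_a.astype(float), band_b.astype(float)
    return (a - b) / np.clip(a + b, 1e-6, None)

def detect_skin(band_a, band_b, lo=0.4, hi=0.9):
    """Boolean skin mask from thresholding the index; the thresholds are
    hypothetical and would be tuned to the sensor and melanin range sought."""
    idx = skin_index(band_a, band_b)
    return (idx > lo) & (idx < hi)
```

Adjusting the threshold band amounts to the melanin filtering described above: different melanin concentrations shift the index, so the window can be narrowed around the complexion of the person of interest.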

 

Figure 6:  AFIT Skin Detection [26].

 

4.4.3  UNIS Small, Low-Cost Hyperspectral Imagers

Researchers used 3-D printing and low-cost parts to create an inexpensive hyperspectral imager that is light enough to use onboard drones. They offer a recipe for creating these imagers, which could make the traditionally expensive analytical technique more widely accessible.  The researchers detail how to make visible-wavelength, hyperspectral imagers weighing less than half a pound for as little as $700. They also demonstrate that the imagers could acquire spectral data from aboard a drone.  Currently, the imagers lack the sensitivity necessary for intelligence, surveillance, and reconnaissance operations from significant altitudes, but this will improve with time [27].

 


5.0  GTRI Search

Dr. Powers Garmon, the principal research scientist at GTRI’s Sensors and Electromagnetic Applications Laboratory, searched GTRI and Institute of Electrical and Electronics Engineers (IEEE) repositories for technical reports, journal publications, and conference proceedings related to the inquiry, resulting in a bibliography of over 110 references.

For more than a decade, a host of GTRI researchers has been funded by the U.S. Department of Defense to perform research in air-to-ground targeting.  This research includes ground moving target indication (GMTI) research that addresses vehicles and dismounts in real-world environments.  The research addresses the latest hardware, algorithms, and processing methods.

Unclassified publications authored by GTRI researchers, as well as those not affiliated with GTRI, are provided in the Bibliography.  In addition, GTRI researchers have presented numerous papers related to the inquiry at the Tri-Service Radar Symposium in the past decade; information on this can be provided for valid requests.

 


References

[1] Young, J. “9 Best Drones That Follow You [Crystal Clear Video] 2019.” DroneGuru, http://www.droneguru.net/8-best-drones-that-follow-you-follow-drones/, 11 January 2019.

[2] Shachtman, N. “Army Tracking Plan:  Drones That Never Forget a Face.” Wired, https://www.wired.com/2011/09/drones-never-forget-a-face/, 28 September 2011.

[3] De Smedt, F., D. Hulens, and T. Goedeme. “On-Board Real-Time Tracking of Pedestrians on a UAV.” Semantic Scholar, https://www.semanticscholar.org/paper/On-board-real-time-tracking-of-pedestrians-on-a-UAV-Smedt-Hulens/9f7c1b794805be34bc2091e02c382c5461e0bcb4, 2015.

[4] Burkert, F., and F. Fraundorfer. “UAV-Based Monitoring of Pedestrian Groups.” UAV-g, http://www.uav-g.org/Presentations/UAV-g2013_Burkert.pdf, 9 April 2013.

[5] Li, S., and D.-Y. Yeung. “Visual Object Tracking for Unmanned Aerial Vehicles:  A Benchmark and New Motion Models.” Proceedings of the 31st AAAI Conference on Artificial Intelligence, https://www.aaai.org/ocs/index.php/AAAI/AAAI17/paper/viewFile/14338/14292, 2017.

[6] Skydio. “Skydio R1:  Introducing Apple Watch App.” https://www.skydio.com/#video-follow, accessed June 2018.

[7] Skydio. “What is the Skydio Autonomy Engine?” https://www.skydio.com/2018/02/what-is-the-skydio-autonomy-engine/, 13 February 2018.

[8] NVIDIA. “Jetson Family.” https://developer.nvidia.com/embedded/develop/hardware, accessed June 2018.

[9] Skydio. “Technology.” https://www.skydio.com/technology/, accessed June 2018.

[10] Aerofly Drones. “Skydio R1 Review:  Autonomous, Self-Flying, and Intelligent 4K Drone.” https://www.aeroflydrones.com/skydio-r1-review/, accessed June 2018.

[11] Skydio. https://www.skydio.com/, accessed June 2018.

[12] Portmann, J., S. Lynen, M. Chli, and R. Siegwart. “People Detection and Tracking From Aerial Thermal Views.” MargaritaChli.com, http://margaritachli.com/papers/ICRA2014paper.pdf, 2014.

[13] Gaszczak, A., T. P. Breckon, and J. Han. “Real-Time People and Vehicle Detection From UAV Imagery.” http://breckon.eu/toby/publications/papers/gaszczak11uavpeople.pdf, accessed June 2018.

[14] Ma, Y., X. Wu, G. Yu, Y. Xu, and Y. Wang. “Pedestrian Detection and Tracking from Low-Resolution Unmanned Aerial Vehicle Thermal Imagery.” Sensors (Basel), vol. 16, no. 4, p. 446, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4850960/, 26 March 2016.

[15] Dille, M., B. Grocholsky, and S. Singh. “Persistent Visual Tracking and Accurate Geo-Location of Moving Ground Targets by Small Air Vehicles.” Infotech@Aerospace 2011, St. Louis, MO, https://arc.aiaa.org/doi/abs/10.2514/6.2011-1558, 2011.

[16] Chen, X., and Q. Meng. “Robust Vehicle Tracking and Detection From UAVs.” The 2015 7th International Conference of Soft Computing and Pattern Recognition, https://ieeexplore.ieee.org/document/7492814, 16 June 2016.

[17] Valasek, J., K. Kirkpatrick, J. May, and J. Harris. “Intelligent Motion Video Guidance for Unmanned Air System Ground Target Surveillance.” Journal of Aerospace Information Systems, vol. 13, no. 1, https://vscl.tamu.edu/wp-content/uploads/sites/134/2017/07/MotionVideo1.pdf, January 2016.

[18] Corfield, G. “Whose Drone Is That? DJI Unveils UAV Traffic Tracking System.” The Register, https://www.theregister.co.uk/2017/10/12/dji_drone_aeroscope/, 12 October 2017.

[19] Benyamin, M., and G. H. Goldman. “Acoustic Detection and Tracking of a Class I UAS With a Small Tetrahedral Microphone Array.” ARL, September 2014. https://www.arl.army.mil/arlreports/2014/ARL-TR-7086.pdf, accessed June 2018.

[20] Busset, J., F. Perrodin, P. Wellig, B. Ott, K. Heutschi, T. Ruhl, and T. Nussbaumer. “Detection and Tracking of Drones Using Advanced Acoustic Cameras.” Proceedings Volume 9647, Unmanned/Unattended Sensors and Sensor Network XI; and Advanced Free-Space Optical Communication Techniques and Applications, https://www.spiedigitallibrary.org/conference-proceedings-of-spie/9647/96470F/Detection-and-tracking-of-drones-using-advanced-acoustic-cameras/10.1117/12.2194309.short?SSO=1, 13 October 2015.

[21] Case, E. E., A. M. Zelnio, and B. D. Rigling. “Low-Cost Acoustic Array for Small UAV Detection and Tracking.” The 2008 IEEE National Aerospace and Electronics Conference, https://ieeexplore.ieee.org/document/4806528, 27 March 2009.

[22] Stann, B. L., J. F. Dammann, M. Del Giorno, C. DiBerardino, M. M. Giza, M. A. Powers, and N. Uzunovic. “Integration and Demonstration of MEMS-Scanned LADAR for Robotic Navigation.” Proceedings Volume 9084, Unmanned Systems Technology XVI, https://www.spiedigitallibrary.org/conference-proceedings-of-spie/9084/90840J/Integration-and-demonstration-of-MEMS-scanned-LADAR-for-robotic-navigation/10.1117/12.2050687.short?SSO=1, 3 June 2014.

[23] Coldewey, D. “Echodyne’s Pocket-Sized Radar May Be the Next Must-Have Tech for Drones (and Drones Hunters).” Tech Crunch, https://techcrunch.com/2017/05/12/echodynes-pocket-sized-radar-may-be-the-next-must-have-tech-for-drones-and-drone-hunters/, 12 May 2017.

[24] Peck, M. “Army Wants to Mount Counter-UAV Radar on Drones.” C4ISRNet, https://www.c4isrnet.com/unmanned/uas/2017/05/16/army-wants-to-mount-counter-uav-radar-on-drones/, 16 May 2017.

[25] ARL. “Army Develops Face Recognition Technology That Works in the Dark.” https://www.arl.army.mil/www/default.cfm?article=3199, 16 April 2018.

[26] Simison, S. “AFIT Research Team Develops Enhanced Imaging System to Aid in Search and Rescue, Security and Surveillance.” Wright-Patterson Air Force Base, https://www.wpafb.af.mil/News/Article-Display/Article/818590/afit-research-team-develops-enhanced-imaging-system-to-aid-in-search-and-rescue/, 22 January 2016.

[27] The Optical Society of America. “Lightweight Hyperspectral Imagers Bring Sophisticated Imaging Capability to Drones.” https://www.osa.org/en-us/about_osa/newsroom/news_releases/2018/lightweight_hyperspectral_imagers_bring_sophistica/, 28 February 2018.

 


Bibliography

 

Georgia Tech Research Institute (GTRI) Technical Reports Prepared for the U.S. Air Force Research Laboratory (AFRL)

Bing, K. F., T. M. Selee, and W. L. Melvin. “Mountain Intelligence, Surveillance, and Reconnaissance (MISR).” AFRL-RY-WP-TR-2010, Synergistic Electronic Warfare Program (SWARM), task order 0012, March 2010.

Hersey, R. “Gunship Over Degraded Visual Environments Sensor Investigation (GODSI).” AFRL-RY-WP-TR-2015-0077, SWARM, task order 0001, vol. 2, April 2015.

Hersey, R., P. Garmon, T. Benson, T. Selee, J. Palmer, and J. Reed. “Dismount Radar Mode Systems Engineering Investigation (DRMSEI).” AFRL-RY-WP-TR-2014-0191, SWARM, task order 0013, August 2014.

Selee, T., J. Palmer, P. Garmon, R. Hersey, and A. Foote. “Advanced Signal Exploitation of Relevant Target Dynamics (ASERTD).” AFRL-RY-WP-TR-2015-0077, SWARM, task order 0001, vol. 3, April 2015.

 

Institute of Electrical and Electronics Engineers (IEEE) Journal Articles and Conference Proceedings Published by GTRI

Bales, M. R., T. Benson, R. Dickerson, D. Campbell, R. Hersey, and E. Culpepper. “Real-Time Implementations of Ordered-Statistic CFAR.” The 2012 IEEE Radar Conference, pp. 0896–0901, http://ieeexplore.ieee.org/document/6212264/, 2012.

Benson, T. K., R. K. Hersey, and E. Culpepper. “GPU-Based Space-Time Adaptive Processing (STAP) for Radar.” The 2013 IEEE High Performance Extreme Computing Conference, pp. 1–6, http://ieeexplore.ieee.org/document/6670341/, 2013.

Bruna, M. A., K. F. Bing, and M. Minges. “Airborne Bistatic Radar Trajectory Optimization for Ground Geolocation Accuracy Maximization.” The 2017 IEEE Radar Conference, pp. 1–5, http://ieeexplore.ieee.org/document/7944465/, 2017.

Bruna, M. A., K. F. Bing, and M. Minges. “Cramer-Rao Lower Bound Assessment When Using Bistatic Clutter Mitigation Techniques.” The 2016 IEEE Radar Conference, pp. 1-6, http://ieeexplore.ieee.org/document/7485093/, 2016.

Fertig, L. B., M. J. Baden, J. C. Kerce, and D. Sobota. “Localization and Tracking with Multipath Exploitation Radar.” The 2012 IEEE Radar Conference, pp. 1014–1018, http://ieeexplore.ieee.org/document/6212286/, 2012.

Gürbüz, S. V., D. B. Williams, and W. L. Melvin. “Enhanced Detection and Characterization of Human Targets via Non-Linear Phase Modeling.” The 2010 IEEE Radar Conference, pp. 183–187, http://ieeexplore.ieee.org/document/5494630/, 2010.

Hersey, R. K., and E. Culpepper. “Radar Processing Architecture for Simultaneous SAR, GMTI, ATR, and Tracking.” The 2016 IEEE Radar Conference, pp. 1–5, http://ieeexplore.ieee.org/document/7485076/, 2016.

Hersey, R. K., W. L. Melvin, and E. Culpepper. “Dismount Modeling and Detection From Small Aperture Moving Radar Platforms.” The 2008 IEEE Radar Conference, pp. 1–6, http://ieeexplore.ieee.org/document/4720724/, 2008.

Hersey, R. K., G. A. Showman, and E. Culpepper. “Clutter-Based Array Calibration for Enhanced Geolocation Accuracy.” The 2013 IEEE Radar Conference, pp. 1–5, http://ieeexplore.ieee.org/document/6586085/, 2013.

Hersey, R. K., D. Bowden, D. Bruening, and L. Westbrook. “Radar Modeling and Validation of Human Gaits Using Joint Motion-Capture and Radar Data Collections.” Asilomar Conference on Signals, Systems, and Computers, http://ieeexplore.ieee.org/document/6810425/, November 2013.

Paulus, A. S., W. L. Melvin, and B. Himed. “Performance and Computational Trades for RD-STAP Algorithms in Challenging Detection Environments.” The 2016 IEEE Radar Conference, pp. 1–6, http://ieeexplore.ieee.org/document/7485078/, 2016.

Paulus, A. S., W. L. Melvin, and D. B. Williams. “Improved Target Detection Through Extended Dwell Time Algorithm.” The 2015 IEEE Radar Conference, pp. 0484–0489, http://ieeexplore.ieee.org/document/7131047/, 2015.

Paulus, A. S., W. L. Melvin, and D. B. Williams. “Multichannel GMTI Techniques to Enhance Integration of Temporal Signal Energy for Improved Target Detection.” The Institution of Engineering and Technology (IET) Radar, Sonar & Navigation, vol. 11, no. 3, pp. 336–350, http://ieeexplore.ieee.org/document/7891280/, 2017.

Paulus, A. S., W. L. Melvin, and D. B. Williams. “Multistage Algorithm for Single-Channel Extended-Dwell Signal Integration.” IEEE Transactions on Aerospace and Electronic Systems, vol. 53, no. 6, pp. 2998–3007, http://ieeexplore.ieee.org/document/7971982/, 2017.

Paulus, A. S., W. L. Melvin, and D. B. Williams. “Multistage Algorithms for Extended Dwell Target Detection.” The 2014 IEEE Radar Conference, pp. 0269–0274, http://ieeexplore.ieee.org/document/6875597/, 2014.

Sangston, K. J., F. Gini, and M. S. Greco. “Adaptive Detection of Radar Targets in Compound-Gaussian Clutter.” The 2015 IEEE Radar Conference, pp. 0587–0592, http://ieeexplore.ieee.org/document/7131066/, 2015.

Selee, T. M., K. F. Bing, and W. L. Melvin. “STAP Application in Mountainous Terrain: Challenges and Strategies.” The 2012 IEEE Radar Conference, pp. 0824–0829, http://ieeexplore.ieee.org/document/6212251/, 2012.

 

Publications by GTRI Researchers from Miscellaneous Conferences as of 2015

Hersey, R. K., and E. Culpepper. “Development of a Real-Time Dismount Radar Mode (DRMSEI).” National Fire Control Symposium, Las Vegas, NV, August 2012.

Hersey, R. K., and E. Culpepper. “Sub-Band Processing for Grating Lobe Disambiguation in Sparse Arrays.” The International Society for Optical Engineering (SPIE) Defense, Security, and Sensing (DSS), May 2014.

Hersey, R. K., and W. L. Melvin. “Dismount Radar Mode Development.” Defense Advanced Research Projects Agency (DARPA) Dismount Workshop, Boston, MA, 18 October 2010.

Hersey, R. K., B. Hayden, and M. Longbrake. “Gunship Over DVE Sensor Investigation (GODSI).” National Fire Control Symposium, El Segundo, CA, February 2015.

Hersey, R. K., W. L. Melvin, and E. Culpepper. “Dismount Radar Mode Development.” Geospatial Intelligence (GEOINT) Workshop, Dayton, OH, 21 April 2011.

Hersey, R. K., W. L. Melvin, and E. Culpepper. “Dismount Radar Mode Development.” Ground Moving Target Indicator (GMTI) Community of Practice, San Diego, CA, 1 February 2011.

Hersey, R. K., W. L. Melvin, and E. Culpepper. “Dismount Radar Mode Development.” North Atlantic Treaty Organization (NATO) SET Dismount Workshop, Ottawa, Canada, 14–15 September 2011.

Hersey, R. K., W. L. Melvin, E. Culpepper, and M. Bryant. “Dismount Radar Mode Development.” North Atlantic Treaty Organization (NATO) Sensors and Electronics Technology (SET) Military Sensing Symposium, Friedrichshafen, Germany, 16–18 May 2011.

Selee, T., J. Palmer, J. Reed, R. Hersey, D. Bowden, D. Bruening, and L. Westbrook. “Advanced Human, Vehicle, Animal, or Clutter (HVAC) Discrimination.” Automatic Target Recognition Workshop, Huntsville, AL, June 2013.

Selee, T., J. Palmer, J. Reed, R. Hersey, D. Bowden, D. Bruening, and L. Westbrook. “Dismount Discrimination Using Range-Doppler Maps.” Automatic Target Recognition Workshop, Huntsville, AL, June 2013.

Selee, T., J. Palmer, J. Reed, R. Hersey, D. Bowden, D. Bruening, and L. Westbrook. “Exploiting Group Information From Range-Doppler Images.” Automatic Target Recognition Workshop, Huntsville, AL, June 2013.

 

IEEE Journal Articles and Conference Proceedings Published by Non-GTRI Researchers

Baumgartner, S. V., and G. Krieger. “Simultaneous High-Resolution Wide-Swath SAR Imaging and Ground Moving Target Indication: Processing Approaches and System Concepts.” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 8, no. 11, pp. 5015–5029, http://ieeexplore.ieee.org/document/7172443/, 2015.

Clemente, C., L. Pallotta, I. Proudler, A. De Maio, J. J. Soraghan, and A. Farina. “Pseudo-Zernike-Based Multi-Pass Automatic Target Recognition From Multi-Channel Synthetic Aperture Radar.” IET Radar, Sonar & Navigation, vol. 9, no. 4, pp. 457–466, http://ieeexplore.ieee.org/document/7070594/, April 2015.

Dong, Q., M.-D. Xing, X.-G. Xia, S. Zhang, and G.-C. Sun. “Moving Target Refocusing Algorithm in 2-D Wavenumber Domain After BP Integral.” IEEE Geoscience and Remote Sensing Letters, vol. 15, no. 1, pp. 127–131, http://ieeexplore.ieee.org/document/8187645/, January 2018.

Drozdowicz, J. “Moving Target Imaging Using Dual-Channel High Resolution 35 GHz SAR Radar.” The 2016 17th International Radar Symposium, pp. 1–4, http://ieeexplore.ieee.org/document/7497310/, 2016.

Drozdowicz, J., M. Watroba, P. Samczynski, and A. Gromek. “Ground Moving Target Indication in High Resolution Synthetic Aperture Radar Imaging - A Comparison of Selected Algorithms.” The 2017 18th International Radar Symposium (IRS), pp. 1–8, http://ieeexplore.ieee.org/document/8008114/, 2017.

Du, W., Z. Yang, and G. Liao. “Improved Ground Moving Target Indication Method in Heterogeneous Environment With Polarization-Aided Adaptive Processing.” IEEE Geoscience and Remote Sensing Letters, vol. 13, no. 11, pp. 1729–1733, http://ieeexplore.ieee.org/document/7575657/, 2016.

Ender, J., and R. Kohlleppel. “Knowledge Based Ground Moving Target Detection and Tracking Using Sparse Representation Techniques.” The 2015 16th International Radar Symposium, pp. 374–379, http://ieeexplore.ieee.org/document/7226396/, 2015.

Fan, B., Y. Qin, P. You, and H. Wang. “An Improved PFA With Aperture Accommodation for Widefield Spotlight SAR Imaging.” IEEE Geoscience and Remote Sensing Letters, vol. 12, no. 1, pp. 3–7, http://ieeexplore.ieee.org/document/6853358/, 2015.

Goldstein, J. S., M. L. Picciolo, M. Rangaswamy, and J. D. Griesbach. “Detection of Dismounts Using Synthetic Aperture Radar.” The 2010 11th International Radar Symposium, pp. 209–214, http://ieeexplore.ieee.org/document/5494623/, 2010.

Greenewald, K., E. Zelnio, and A. Hero. “Robust SAR STAP via Kronecker Decomposition.” IEEE Transactions on Aerospace and Electronic Systems, vol. 52, no. 6, pp. 2612–2625, http://ieeexplore.ieee.org/document/7855571/, December 2016.

Giudici, D., A. Recchia, D. D’Aria, A. Valentino, and A. M. Guarnieri. “A Flexible Frequency Domain Background Clutter SAR Simulator for GMTI Applications.” The 2015 IEEE International Geoscience and Remote Sensing Symposium, pp. 2099–2102, http://ieeexplore.ieee.org/document/7326216/, 2015.

Huang, Y., G. Liao, J. Xu, and J. Li. “GMTI and Parameter Estimation via Time-Doppler Chirp-Varying Approach for Single-Channel Airborne SAR System.” IEEE Transactions on Geoscience and Remote Sensing, vol. 55, no. 8, pp. 4367–4383, http://ieeexplore.ieee.org/document/7934008/, August 2017.

Huang, Z., J. Xu, S. Peng, and Z. Wang. “A New Channel Balancing Algorithm in Image Domain for Multichannel SAR-GMTI System.” IET International Radar Conference 2015, pp. 1–5, http://ieeexplore.ieee.org/document/7455553/, 2015.

Huang, P., G. Liao, Z. Yang, X.-G. Xia, and J. Ma. “A New Method for Ground Moving Target Imaging With Single-Antenna SAR.” The 2016 CIE International Conference on Radar, pp. 1–4, http://ieeexplore.ieee.org/document/8059207/, 2016.

Huang, Z., J. Xu, L. Liu, T. Long, and Z. Wang. “SAR Ground Moving Targets Relocation via Co-Prime Arrays.” The 2017 IEEE Radar Conference, pp. 0792–0796, http://ieeexplore.ieee.org/document/7944311/, 2017.

Huang, Z.-Z., J. Xu, Z.-R. Wang, X.-G. Xia, and T. Long, and M.-M. Bian. “Along-Track Velocity Estimation for SAR Moving Target in Complex Image Domain.” The 2016 CIE International Conference on Radar, pp. 1–5, http://ieeexplore.ieee.org/document/8059497/, 2016.

Huang, Z.-Z., Z.-G. Ding, J. Xu, T. Zeng, L. Liu, Z.-R. Wang, and C.-H. Feng. “Azimuth Location Deambiguity for SAR Ground Moving Targets via Coprime Adjacent Arrays.” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 11, no. 2, pp. 551–561, http://ieeexplore.ieee.org/document/8265029/, February 2018.

Jiang, J., J. Liu, G. Zhang, and L. Wang. “Bayesian Compressive Sensing Based SAR Imaging for GMTI System.” The 20th International Conference on Information Fusion, pp. 1–8, http://ieeexplore.ieee.org/document/8009885/, 2017.

Jinbin, F., S. Jinping, S. Wei, and T. Xianzhong. “A Novel State-Dependent VS-IMM Tracker for GMTI Radar.” IET International Radar Conference 2015, pp. 1–5, http://ieeexplore.ieee.org/document/7455214/, 2015.

Jing, Y., and L. Yaan. “Moving Target Parameter Estimation Algorithm Using Contrast Optimization.” The 2016 13th International Bhurban Conference on Applied Sciences and Technology, pp. 731–734, http://ieeexplore.ieee.org/document/7429964/, 2016.

Kreucher, C. “Dismount Tracking by Fusing Measurements From a Constellation of Bistatic Narrowband Radar.” The 2011 Proceedings of the 14th International Conference on Information Fusion, pp. 0824–0829, http://ieeexplore.ieee.org/document/5977525/, 2011.

Lei, Y., D. Ke, and T. Zhi. “A Fast GMTI Method Using Data-Based Channel Calibration and SPECAN Processing.” IET International Radar Conference 2015, pp. 1–5, http://ieeexplore.ieee.org/document/7455254/, 2015.

Li, J., Y. Huang, G. Liao, and J. Xu. “Moving Target Detection via Efficient ATI-GoDec Approach for Multichannel SAR System.” IEEE Geoscience and Remote Sensing Letters, vol. 13, no. 9, pp. 1320–1324, http://ieeexplore.ieee.org/document/7509632/, 2016.

Li, X., M. Xing, G.-C. Sun, and Z. Bao. “A Novel Deramp Space-Time Adaptive Processing Method for Multichannel SAR-GMTI.” The 2015 IEEE International Geoscience and Remote Sensing Symposium, pp. 4272–4275, http://ieeexplore.ieee.org/document/7326770/, 2015.

Li, Y., T. Wang, B. Liu, L. Yang, and G. Bi. “Ground Moving Target Imaging and Motion Parameter Estimation With Airborne Dual-Channel CSSAR.” IEEE Transactions on Geoscience and Remote Sensing, vol. 55, no. 9, pp. 5242–5253, http://ieeexplore.ieee.org/document/7944700/, 2017.

Liu, W., T. Jin, B. Lu, and Z. Zhou. “An Anisotropic Feature Extraction Method Based on Space-Wavenumber Distribution for Circular SAR.” The 2016 Chinese Institute of Electronics (CIE) International Conference on Radar, pp. 1–4, https://ieeexplore.ieee.org/document/8059433/, 2016.

Long, Y., and G. Kuang. “A Block-Sparse Reconstruction Method for SAR GMTI.” The 2017 4th International Conference on Systems and Informatics, http://ieeexplore.ieee.org/document/8248494/, 11–13 November 2017.

Lv, G., Y. Li, G. Wang, and Y. Zhang. “Ground Moving Target Indication in SAR Images With Symmetric Doppler Views.” IEEE Transactions on Geoscience and Remote Sensing, vol. 54, no. 1, pp. 533–543, http://ieeexplore.ieee.org/document/7208875/, 2016.

Mallick, M., Y. Bar-Shalom, T. Kirubarajan, and M. Moreland. “An Improved Single-Point Track Initiation Using GMTI Measurements.” IEEE Transactions on Aerospace and Electronic Systems, vol. 51, no. 4, pp. 2697–2714, http://ieeexplore.ieee.org/document/7376211/, 2015.

Mallick, M., B. La Scala, B. Ristic, T. Kirubarajan, and J. Hill. “Comparison of Filtering Algorithms for Ground Target Tracking Using Space-Based GMTI Radar.” The 2015 18th International Conference on Information Fusion, pp. 1672–1679, http://ieeexplore.ieee.org/document/7266757/, 2015.

Mehmood, A., J. Clark, and W. Sakla. “Skin-Based Hyperspectral Dismount Detection Using Sparse Representation.” The 2013 5th Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing, pp. 1–4, http://ieeexplore.ieee.org/document/8080720/, 2013.

Mertens, M., and M. Ulmke. “Context-Based Ground Target Tracking - An Integrated Approach.” The 2015 18th International Conference on Information Fusion (Fusion), pp. 1136–1143, http://ieeexplore.ieee.org/document/7266686/, 2015.

Mori, S., H. Hoang, P. O. Arambel, C. Rago, M. J. Shea, P. L. Davey, C.-Y. Chong, and S. J. Alter. “Group State Estimation Algorithm Using Foliage Penetration GMTI Radar Detections.” The 17th International Conference on Information Fusion, pp. 1–8, http://ieeexplore.ieee.org/document/6916154/, 2014.

Newey, M., S. Mishra, G. Benitz, and D. Barrett. “Mover Detection in Single Channel LiMIT SAR Data.” The 2015 IEEE Radar Conference, pp. 0998–1003, http://ieeexplore.ieee.org/document/7131140/, 2015.

Oveis, A. H., and M. A. Sebt. “High Resolution Ground Moving Target Indication by Synthetic Aperture Radar Using Compressed Sensing.” The 2017 Iranian Conference on Electrical Engineering, pp. 1674–1679, http://ieeexplore.ieee.org/document/7985318/, 2017.

Page, D., G. Owirka, H. Nichols, S. Scarborough, M. Minardi, and L. Gorham. “Detection and Tracking of Moving Vehicles With Gotcha Radar Systems.” IEEE Aerospace and Electronic Systems Magazine, vol. 29, no. 1, pp. 50–60, http://ieeexplore.ieee.org/document/6750512/, January 2014.

Pastina, D., and F. Turin. “Exploitation of the COSMO-SkyMed SAR System for GMTI Applications.” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 8, no. 3, pp. 966–979, http://ieeexplore.ieee.org/document/6919998/, 2015.

Pillai, U., K. Y. Li, and S. Scarborough. “Target Geolocation in Gotcha Data Using Panoramic Processing.” The 2015 IEEE Radar Conference, pp. 0021–0026, http://ieeexplore.ieee.org/document/7130964/, 2015.

Piou, J. E. “Human Gait Extraction From Short and Sparse Radar Dwells.” The 2016 IEEE Radar Conference, pp. 1–5, http://ieeexplore.ieee.org/document/7485272/, 2016.

Que, R., O. Ponce, S. V. Baumgartner, and R. Scheiber. “Multi-Mode Real-Time SAR On-Board Processing.” Proceedings of EUSAR 2016: 11th European Conference on Synthetic Aperture Radar, pp. 1–6, http://ieeexplore.ieee.org/document/7559263/, 2016.

Raj, R. G., V. C. Chen, and R. Lipps. “Analysis of Radar Dismount Signatures via Non-Parametric and Parametric Methods.” The 2009 IEEE Radar Conference, pp. 1–6, http://ieeexplore.ieee.org/document/4977025/, 2009.

Riedl, M., and L. C. Potter. “Knowledge-Aided Bayesian Space-Time Adaptive Processing.” IEEE Transactions on Aerospace and Electronic Systems, vol. 54, no. 4, pp. 1850–1861, http://ieeexplore.ieee.org/document/8289377/, August 2018.

Shen, W., Y. Lin, Y. Zhao, L. Yu, and W. Hong. “Initial Result of Single Channel CSAR GMTI Based on Background Subtraction.” The 2017 IEEE Geoscience and Remote Sensing Symposium, pp. 976–979, http://ieeexplore.ieee.org/document/8127117/, 2017.

Sheng, H., C. Zhang, Y. Gao, K. Wang, and X. Liu. “Dual-Channel SAR Moving Target Detector Based on WVD and FAC.” The 2016 CIE International Conference on Radar, pp. 1–5, http://ieeexplore.ieee.org/document/8059269/, 2016.

Sjögren, T., and V. Vu. “Detection of Slow and Fast Moving Targets Using Hybrid CD-DMTF SAR GMTI Mode.” The 2015 IEEE Fifth Asia-Pacific Conference on Synthetic Aperture Radar, pp. 818–821, http://ieeexplore.ieee.org/document/7306329/, 2015.

Song, X., and W. Yu. “Processing Video-SAR Data With the Fast Backprojection Method.” IEEE Transactions on Aerospace and Electronic Systems, vol. 52, no. 6, pp. 2838–2848, http://ieeexplore.ieee.org/document/7855587/, December 2016.

Stojanovic, I., L. Novak, and W. C. Karl. “Interrupted SAR Persistent Surveillance via Group Sparse Reconstruction of Multipass Data.” IEEE Transactions on Aerospace and Electronic Systems, vol. 50, no. 2, pp. 987–1003, http://ieeexplore.ieee.org/document/6850195/, 2014.

Suwa, K., K. Yamamoto, M. Tsuchida, S. Nakamura, T. Wakayama, and T. Hara. “Image-Based Target Detection and Radial Velocity Estimation Methods for Multichannel SAR-GMTI.” IEEE Transactions on Geoscience and Remote Sensing, vol. 55, no. 3, pp. 1325–1338, http://ieeexplore.ieee.org/document/7748565/, March 2017.

Tahmoush, D., J. Clark, and J. Silvious. “Tracking of Dismounts Moving in Cross-Range Using GMTI Radar.” The 2010 11th International Radar Symposium, pp. 1–8, http://ieeexplore.ieee.org/document/5547468/, 2010.

Usmail, C. L., M. O. Little, and R. E. Zuber. “Evolution of Embedded Processing for Wide Area Surveillance.” IEEE Aerospace and Electronic Systems Magazine, vol. 29, no. 1, pp. 6–13, http://ieeexplore.ieee.org/document/6750507/, January 2014.

Vehmas, R., J. Jylhä, M. Väilä, and A. Visa. “Analysis and Comparison of Multichannel SAR Imaging Algorithms.” The 2017 IEEE Radar Conference, pp. 0340–0345, http://ieeexplore.ieee.org/document/7944224/, 2017.

Wang, H., M. Jiang, and S. Zheng. “Airborne Ka FMCW MiSAR System and Real Data Processing.” The 2016 17th International Radar Symposium, pp. 1–5, http://ieeexplore.ieee.org/document/7497302/, 2016.

Wang, H., H. Zhang, S. Dai, and Z. Sun. “Azimuth Multichannel GMTI Based on Ka-Band DBF-SCORE SAR System.” IEEE Geoscience and Remote Sensing Letters, vol. 15, no. 3, pp. 419–423, http://ieeexplore.ieee.org/document/8280557/, March 2018.

Wang, W.-Q. “Multichannel SAR Using Waveform Diversity and Distinct Carrier Frequency for Ground Moving Target Indication.” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 8, no. 11, pp. 5040–5051, http://ieeexplore.ieee.org/document/7299597/, 2015.

Wang, W.-Q., S. Zhang, and P. Huang. “Simultaneous SAR Imaging and GMTI by Fractional Fourier Transform Processing.” The 2016 IEEE International Geoscience and Remote Sensing Symposium, pp. 1210–1213, http://ieeexplore.ieee.org/document/7729306/, 2016.

Wang, Z., J. Xu, Z.-Z. Huang, X.-D. Zhang, X.-G. Xia, and T. Long. “Road-Aided Doppler Ambiguity Resolver for SAR Ground Moving Target in the Image Domain.” IEEE Geoscience and Remote Sensing Letters, vol. 13, no. 10, pp. 1552–1556, http://ieeexplore.ieee.org/document/7551220/, 2016.

Weedon, W. H., and Z. T. White. “X-Band Phased Array Radar Testbed.” The 2013 IEEE International Symposium on Phased Array Systems and Technology, pp. 367–370, http://ieeexplore.ieee.org/document/6731856/, 2013.

Wu, D., M. Yaghoobi, and M. Davies. “Digital Elevation Model Aided SAR-Based GMTI Processing in Urban Environments.” The 2016 Sensor Signal Processing for Defence, pp. 1–5, http://ieeexplore.ieee.org/document/7590593/, 2016.

Wu, D., M. Yaghoobi, and M. Davies. “Sparsity Based Ground Moving Target Imaging via Multi-Channel SAR.” The 2015 Sensor Signal Processing for Defence, pp. 1–5, http://ieeexplore.ieee.org/document/7288524/, 2015.

Xu, H., Z. Yang, G. Chen, G. Liao, and M. Tan. “A Ground Moving Target Detection Approach Based on Shadow Feature With Multichannel High-Resolution Synthetic Aperture Radar.” IEEE Geoscience and Remote Sensing Letters, vol. 13, no. 10, pp. 1572–1576, http://ieeexplore.ieee.org/document/7547963/, 2016.

Xu, H., Z. Yang, M. Tian, Y. Sun, and G. Liao. “An Extended Moving Target Detection Approach for High-Resolution Multichannel SAR-GMTI Systems Based on Enhanced Shadow-Aided Decision.” IEEE Transactions on Geoscience and Remote Sensing, vol. 56, no. 2, pp. 715–729, http://ieeexplore.ieee.org/document/8064181/, February 2018.

Xu, L., C. Gianelli, and J. Li. “Long-CPI MIMO SAR Based GMTI.” The 2016 IEEE Radar Conference, pp. 1–4, http://ieeexplore.ieee.org/document/7485159/, 2016.

Xu, L., C. Gianelli, and J. Li. “Long-CPI Multichannel SAR-Based Ground Moving Target Indication.” IEEE Transactions on Geoscience and Remote Sensing, vol. 54, no. 9, pp. 5159–5170, http://ieeexplore.ieee.org/document/7478641/, September 2016.

Xu, L., C. Gianelli, and J. Li. “Long-CPI Multi-Channel SAR Based Ground Moving Target Indication.” The 2016 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3121–3125, http://ieeexplore.ieee.org/document/7472252/, 2016.

Yang, X., and J. Wang. “GMTI Based on a Combination of DPCA and ATI.” IET International Radar Conference 2015, pp. 1–5, http://ieeexplore.ieee.org/document/7455424/, 2015.

Yang, J., C. Liu, and Y. Wang. “Detection and Imaging of Ground Moving Targets With Real SAR Data.” IEEE Transactions on Geoscience and Remote Sensing, vol. 53, no. 2, pp. 920–932, http://ieeexplore.ieee.org/document/6851893/, 2015.

Yang, T., Y. Wang, and W. Li. “A Moving Target Imaging Algorithm for HRWS SAR/GMTI Systems.” IEEE Transactions on Aerospace and Electronic Systems, vol. 53, no. 3, pp. 1147–1157, http://ieeexplore.ieee.org/document/7855658/, June 2017.

Yang, L., L. Zhao, G. Bi, and L. Zhang. “SAR Ground Moving Target Imaging Algorithm Based on Parametric and Dynamic Sparse Bayesian Learning.” IEEE Transactions on Geoscience and Remote Sensing, vol. 52, no. 6, pp. 2254–2267, http://ieeexplore.ieee.org/document/7422105/, 2016.

Yang, L., L. Zhao, S. Zhao, and G. Bi. “Sparsity-Driven SAR Imaging for Highly Maneuvering Ground Target by the Combination of Time-Frequency Analysis and Parametric Bayesian Learning.” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 10, no. 4, pp. 1443–1445, http://ieeexplore.ieee.org/document/7740075/, April 2017.

Yan, H., D. Zhu, R. Wang, and X. Mao. “Practical Signal Processing Algorithm for Wide-Area Surveillance-GMTI Mode.” IET Radar, Sonar & Navigation, vol. 9, no. 8, pp. 991–998, http://ieeexplore.ieee.org/document/7272183/, 2015.

Yan, H., R. Wang, X. Mao, J. Zhang, D. Zhu, D. Wu, Y. Li, and Y. Mao. “Clutter Suppression and Parameter Estimation Based on the Relax Algorithm in WAS-GMTI Mode.” Proceedings of EUSAR 2016: 11th European Conference on Synthetic Aperture Radar, pp. 1–4, http://ieeexplore.ieee.org/document/7559314/, 2016.

Yu, M., C. Liu, B. Li, and W.-H. Chen. “An Enhanced Particle Filtering Method for GMTI Radar Tracking.” IEEE Transactions on Aerospace and Electronic Systems, vol. 52, no. 3, pp. 1408–1420, http://ieeexplore.ieee.org/document/7511867/, June 2016.

Zhang, S.-X., M.-D. Xing, X.-G. Xia, R. Guo, Y.-Y. Liu, and Z. Bao. “Robust Clutter Suppression and Moving Target Imaging Approach for Multichannel in Azimuth High-Resolution and Wide-Swath Synthetic Aperture Radar.” IEEE Transactions on Geoscience and Remote Sensing, vol. 53, no. 2, pp. 687–709, http://ieeexplore.ieee.org/document/6835178/, 2015.

Zhang, S., F. Zhou, G.-C. Sun, X.-G. Xia, M.-D. Xing, and Z. Bao. “A New SAR–GMTI High-Accuracy Focusing and Relocation Method Using Instantaneous Interferometry.” IEEE Transactions on Geoscience and Remote Sensing, vol. 54, no. 9, pp. 5564–5577, http://ieeexplore.ieee.org/document/7484303/, 2016.

Zheng, H., J. Wang, and X. Liu. “Ground Moving Target Indication via Spatial Spectral Processing for Multichannel SAR.” The 2015 8th International Congress on Image and Signal Processing, pp. 760–764, http://ieeexplore.ieee.org/document/7407979/, 2015.

Zheng, H., J. Wang, and X. Liu. “Motion Parameter Estimation for Multichannel SAR-GMTI Systems.” The 2016 CIE International Conference on Radar, pp. 1–4, http://ieeexplore.ieee.org/document/8059542/, 2016.

Zheng, S., X. Li, H. Wang, J. Niu, and S. Chen. “Signal Processing for Ka-Band FMCW Miniature SAR/GMTI System.” The 2015 16th International Radar Symposium, pp. 541–546, http://ieeexplore.ieee.org/document/7226307/, 2015.

 

