Learning to Fuse Things and Stuff

Jie Li*, Allan Raventos*, Arjun Bhargava*, Takaaki Tagawa, Adrien Gaidon
Toyota Research Institute (TRI)
{jie.li, allan.raventos, arjun.bhargava, takaaki.tagawa, adrien.gaidon}@tri.global

Abstract

We propose an end-to-end learning approach for panoptic segmentation, a novel task unifying instance (things) and semantic (stuff) segmentation. Our model, TASCNet, uses feature maps from a shared backbone network to predict both things and stuff segmentations in a single feed-forward pass. We explicitly constrain these two output distributions through a global things and stuff binary mask to enforce cross-task consistency. Our proposed unified network is competitive with the state of the art on several benchmark datasets for panoptic segmentation as well as on the individual semantic and instance segmentation tasks.

1. Introduction

Panoptic segmentation is a computer vision task recently proposed by Kirillov et al. [15] that aims to unify the tasks of semantic segmentation (assign a semantic class label to each pixel) and instance segmentation (detect and segment each object instance). This task has drawn attention from the computer vision community as a key next step in dense scene understanding [14, 18, 28], and several publicly available benchmark datasets have started to provide labels supporting it, including Cityscapes [8], Mapillary Vistas [27], ADE20k [44], and COCO [22].

To date, state-of-the-art techniques for semantic and instance segmentation have evolved in different directions that do not seem directly compatible. On one hand, the best semantic segmentation networks [3, 43, 6] focus on dense tensor-to-tensor classification architectures that excel at recognizing stuff categories like roads, buildings, or sky [1]. These networks leverage discriminative texture and contextual features, achieving impressive results in a wide variety of scenes. On the other hand, the best-performing instance segmentation methods rely on recent progress in object detection: they first detect 2D bounding boxes of objects and then perform foreground segmentation on regions of interest [29, 12, 25]. This approach uses the key insight that things have a well-defined spatial extent and discriminative appearance features.

Figure 1: We propose an end-to-end architecture for panoptic segmentation. Our model predicts things and stuff with a shared backbone and an internal mask enforcing Things and Stuff Consistency (TASC) that can be used to guide fusion. (a) Semantic segmentation; (b) instance segmentation; (c) things and stuff consistency; (d) panoptic segmentation.

The fundamental differences between approaches for handling stuff and things yield a strong natural baseline for panoptic segmentation [15]: use two independent networks for semantic and instance segmentation, followed by heuristic post-processing and late fusion of the two outputs (a minimal sketch of such a fusion step is given below). In contrast, we postulate that addressing the two tasks together will improve performance on the joint panoptic task as well as on the separate semantic and instance segmentation tasks. The basis for this hypothesis is the explicit relation between the tasks at the two ends of the modeling pipeline: i) early on at the feature level (capturing general appearance properties), and ii) at the output space level (mutual exclusion, overlap constraints, and contextual relations).
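As an illustration of the contrast drawn above, the following is a minimal sketch of what the heuristic late-fusion step of the two-network baseline can look like: confident instance masks are pasted first, and remaining pixels keep their stuff prediction. The function name, score threshold, and id conventions are our own illustrative assumptions; the exact post-processing heuristics of [15] differ in detail.

```python
# Minimal sketch (illustrative, not the heuristics of [15]) of a "late fusion"
# panoptic baseline: paste confident instance masks on top of a per-pixel
# stuff prediction, resolving overlaps by detection confidence.
import numpy as np

def fuse_panoptic(semantic, instances, score_thresh=0.5):
    """semantic: (H, W) int array of semantic class ids.
    instances: list of (score, mask) pairs, mask an (H, W) bool array.
    Returns an (H, W) panoptic id map: each kept instance gets a unique id,
    all remaining pixels keep their stuff class id."""
    panoptic = semantic.copy()
    occupied = np.zeros(semantic.shape, dtype=bool)
    next_id = int(semantic.max()) + 1  # ids above the semantic range denote instances
    # Paste instances in decreasing confidence so stronger detections win overlaps.
    for score, mask in sorted(instances, key=lambda x: -x[0]):
        if score < score_thresh:
            continue
        region = mask & ~occupied  # only claim pixels not already assigned
        panoptic[region] = next_id
        occupied |= region
        next_id += 1
    return panoptic
```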
Therefore, the main challenge we address is how to formulate a unified model and optimization scheme in which the sub-task commonalities reinforce learning, while preventing the aforementioned fundamental differences from leading to training instabilities or worse combined performance, a common problem in multi-task learning [42]. Our main contribution is a deep network and end-to-end learning approach for panoptic segmentation.
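To make the cross-task constraint described in the abstract and Figure 1 more concrete, here is a minimal PyTorch-style sketch of one way a Things and Stuff Consistency (TASC) residual could be expressed: the "things" probability mass from the semantic head is compared against the union of predicted instance masks. The tensor shapes, the soft mask union, and the plain L2 penalty are assumptions for illustration, not the exact formulation used in TASCNet.

```python
# Illustrative sketch only (not the authors' implementation): penalize
# disagreement between the "things" region implied by the semantic head
# and the union of instance masks produced by the detection head.
import torch

def tasc_residual(semantic_logits, thing_class_ids, instance_mask_union):
    """semantic_logits: (B, C, H, W) logits over stuff + thing classes.
    thing_class_ids: 1-D LongTensor of indices of the 'thing' classes.
    instance_mask_union: (B, H, W) soft union of predicted instance masks,
    pasted back into full-image coordinates, values in [0, 1].
    Returns a scalar L2 residual between the two 'things' masks."""
    probs = semantic_logits.softmax(dim=1)
    # Per-pixel probability of belonging to any thing class, per the semantic head.
    things_from_semantic = probs[:, thing_class_ids].sum(dim=1)  # (B, H, W)
    return ((things_from_semantic - instance_mask_union) ** 2).mean()
```

Such a residual would be added, with some weight, to the usual semantic and instance losses, pushing both heads of the shared backbone toward mutually consistent outputs.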


ATTENTION! BEFORE OPERATING THIS PUMP, REMOVE THE METAL SHIPPING PLUG FROM THE TOP OF THE PUMP AND INSTALL THE BREATHER.


1. Using an 8 mm Allen wrench, remove the metal oil plug from the top of the pump. Turning the wrench counter-clockwise will loosen the plug.

2. Thread in the supplied breather by hand, turning it clockwise. Use a crescent wrench to fully tighten the breather onto the pump.

FNA Group, Inc., Pleasant Prairie, WI 53158