Course Schedule
July 15th
Time         Instructor               Topic                                                                 Session Chair
08:30-09:00  Hui Huang                Opening Ceremony
09:00-10:20  Dani Lischinski          Introduction to Deep Learning and GANs                                Di Lin
10:40-12:00  Edmond S. L. Ho          Interaction-based Human Activity Comparison, Analysis and Synthesis
14:30-15:50  Paul Guerrero            Learning Irregular Structure in 2D and 3D Data                        Qian Zheng
16:10-17:30  Di Lin                   Applications of Deep Learning in High-level Semantic Recognition
19:00-21:00  Jiacheng Ren & Jin Shen  3ds Max


July 16th
09:00-10:20  Daniel Ritchie           Indoor Scene Synthesis: Past, Present and Future                      Pengfei Xu
10:40-12:00  Min Lu                   Visualization Enhancement for Better Data Understanding and Exploration
14:30-15:50  Weiwei Xu                Research on Deep Feature-driven Computer Graphics Algorithms          Ke Xie
16:10-17:30  Pengfei Xu               Sketch-based Techniques for 3D Content Creation
19:00-21:00  Jiacheng Ren & Jin Shen  3ds Max


July 17th
09:00-10:20  Oliver Deussen           Data Visualization: An Introduction                                   Min Lu
10:40-12:00  Yang Zhou                Analysis and Synthesis of Non-stationary Textures
14:30-14:50  Zhijie Wu                SAGNet: Structure-aware Generative Network for 3D-Shape Modeling      Tan Zhang
14:50-15:10  Yue Jiang                ORC Layout: Adaptive GUI Layout with OR-Constraints
15:10-15:50  Ke Xie                   Drones in Computer Graphics
16:10-17:30  Qian Zheng               Introduction on Reconstruction of Static and Dynamic Objects
19:00-21:00  Jiacheng Ren & Jin Shen  3ds Max


July 18th
09:00-10:20  Kai Xu                   VCC Special Session: Graphics-driven Smart Robot                      Yang Zhou
10:40-11:20  Tan Zhang                Deep Reinforcement Learning for Vision-Based Robotic Manipulation
11:20-12:00  Pengdi Huang             Autonomous Outdoor Scanning via Online Topological and Geometric Path Optimization
14:30-16:00  VCC Faculty              Q/A Panel                                                             Pengdi Huang
16:00-17:30  All                      Closing Ceremony


Course Abstracts

Introduction to Deep Learning and GANs (Dani Lischinski)

In the course of the last seven years, deep learning has revolutionized many fields, including computer vision and computer graphics. In this talk, I will give an introduction to and an overview of convolutional neural networks (CNNs), as well as generative adversarial networks (GANs), which have been extremely effective for a variety of generative tasks.
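The adversarial objective behind GANs can be made concrete with a short numerical sketch. The function below is illustrative (the names are mine, not from the talk): it computes the standard discriminator loss and the non-saturating generator loss from raw discriminator logits.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gan_losses(d_real_logits, d_fake_logits):
    """Standard GAN losses from raw discriminator logits.

    The discriminator maximizes log D(x) + log(1 - D(G(z))); the generator
    uses the non-saturating form, maximizing log D(G(z)).
    """
    d_real = sigmoid(d_real_logits)
    d_fake = sigmoid(d_fake_logits)
    d_loss = -np.mean(np.log(d_real) + np.log(1.0 - d_fake))
    g_loss = -np.mean(np.log(d_fake))
    return d_loss, g_loss

# An undecided discriminator (logits 0, so D = 0.5 everywhere):
d_loss, g_loss = gan_losses(np.zeros(4), np.zeros(4))
print(d_loss, g_loss)  # 2*log(2) ≈ 1.386, log(2) ≈ 0.693
```

During training these two losses are minimized alternately, the first with respect to the discriminator's parameters and the second with respect to the generator's.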

Interaction-based Human Activity Comparison, Analysis and Synthesis (Edmond S. L. Ho)

Traditional methods for motion comparison consider features from individual characters. However, the semantic meaning of many human activities is usually defined by the interaction between characters, such as a high-five between two characters. There has been little success in adapting interaction-based features for activity comparison, analysis and synthesis, as such features either do not have a fixed topology or are high-dimensional. In this talk, I will review some of the existing work on modelling the interactions in human activity, and introduce some of the recent work from our research group on related topics. Finally, the challenges and future research directions will be discussed.

Learning Irregular Structure in 2D and 3D Data (Paul Guerrero)

In shape analysis, several interesting applications require us to reason about the arrangement, layout, or geometric relationships between individual regions of a shape, as opposed to focusing on their individual properties in isolation. For simplicity, I refer to these non-unary properties as the structure of a shape. Applications such as high-level semantic editing of 2D or 3D objects, robust asset extraction from images or videos, exploration of large shape collections, assisted shape editing, or (semi-) automatic content generation, benefit greatly from working with this structure. However, shape structure has been less accessible to deep learning, since both the spatially local convolutions and the regular grid representation used traditionally in deep learning make working with structure unnecessarily hard. In this talk I am going to present alternatives we found to be more effective in our current research, such as graph representations for learning 2D layouts, and hierarchical graph networks for learning 3D shape structure, and I will show how the resulting structure-aware approach benefits several of the applications mentioned above.

Applications of Deep Learning in High-level Semantic Recognition (Di Lin)


Indoor Scene Synthesis: Past, Present and Future (Daniel Ritchie)

Virtual indoor scenes are a critical resource for furniture and interior design companies, architects, virtual and augmented reality experience designers, and computer vision and robotics researchers. Thus, a system which can synthesize novel indoor scenes would be valuable. In this tutorial, I’ll take students on a brief tour of the field of indoor scene synthesis: from its origins in rule/constraint-based systems, through the introduction of data-driven methods, and culminating with state-of-the-art research in deep generative models of scenes. We'll spend the majority of our time discussing the most cutting-edge methods, including analyzing their strengths and weaknesses. The tutorial will conclude with a look toward the future of the field: open problems, opportunities, and ongoing work.

Visualization Enhancement for Better Data Understanding and Exploration (Min Lu)

Visualization helps people understand data in an intuitive way. As the complexity of data increases, making visualizations visually clear becomes challenging. This talk will introduce our two latest works on visualization enhancement. One is Winglets, a visual design for scatterplots that strengthens the perception of association and conveys its uncertainty. The other is an enhancement technique that improves the interactivity of web-based visualizations.

Research on Deep Feature-driven Computer Graphics Algorithms (Weiwei Xu)


Sketch-based Techniques for 3D Content Creation (Pengfei Xu)

Sketch-based interfaces have been adopted by many 3D content creation applications, due to their ability to depict complicated ideas with simple forms. In this talk, I will give a brief overview of sketch-based 3D content creation techniques, focusing on my recent research on this topic.

Data Visualization: An Introduction (Oliver Deussen)

In data visualization, large amounts of data are converted into images in order to use the human eye to find interesting patterns that can later be analysed by data mining techniques. In my talk I will give a brief overview of various data visualization techniques and mention the most important research questions of this field.

Analysis and Synthesis of Non-stationary Textures (Yang Zhou)

The real world exhibits an abundance of non-stationary textures. Examples include textures with large-scale structures, as well as spatially variant and inhomogeneous textures. While existing example-based texture synthesis methods can cope well with stationary textures, non-stationary textures still pose a considerable challenge that remains unresolved. In this talk, I will first introduce our work on the analysis and controlled synthesis of inhomogeneous textures, where a traditional (non-deep) patch-based method is used. I will then introduce our recent work on employing a deep generative model for non-stationary texture synthesis. The power of deep neural networks enables our method to cope with very challenging textures that, to our knowledge, no other existing method can handle.

SAGNet: Structure-aware Generative Network for 3D-Shape Modeling (Zhijie Wu)

We present SAGNet, a structure-aware generative model for 3D shapes. Given a set of segmented objects of a certain class, the geometry of their parts and the pairwise relationships between them (the structure) are jointly learned and embedded in a latent space by an autoencoder. The encoder intertwines the geometry and structure features into a single latent code, while the decoder disentangles the features and reconstructs the geometry and structure of the 3D model. Our autoencoder consists of two branches, one for the structure and one for the geometry. The key idea is that during the analysis, the two branches exchange information between them, thereby learning the dependencies between structure and geometry and encoding two augmented features, which are then fused into a single latent code. This explicit intertwining of information enables separately controlling the geometry and the structure of the generated models. We evaluate the performance of our method and conduct an ablation study. We explicitly show that encoding of shapes accounts for both similarities in structure and geometry. A variety of quality results generated by SAGNet are presented.
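The exchange-then-fuse idea of the two-branch encoder can be sketched in plain numpy. Everything below is a placeholder of mine (single linear layers, arbitrary feature sizes, random untrained weights), not the actual SAGNet network; it only illustrates the data flow: each branch encodes its own modality, the branches exchange features, and the augmented features are fused into one latent code.

```python
import numpy as np

rng = np.random.default_rng(1)

def branch(x, w):
    # One encoder branch: a single linear layer with a tanh nonlinearity.
    return np.tanh(x @ w)

def encode(geometry, structure):
    # Each branch first processes its own modality...
    wg = rng.normal(size=(8, 4))
    ws = rng.normal(size=(8, 4))
    g = branch(geometry, wg)
    s = branch(structure, ws)
    # ...then the branches exchange information: each feature vector is
    # augmented with the other branch's features...
    g_aug = np.concatenate([g, s])
    s_aug = np.concatenate([s, g])
    # ...and the augmented features are fused into a single latent code.
    w_fuse = rng.normal(size=(16, 6))
    return np.concatenate([g_aug, s_aug]) @ w_fuse

z = encode(rng.normal(size=8), rng.normal(size=8))
print(z.shape)  # (6,)
```

In the real model a decoder then disentangles this latent code back into geometry and structure, which is what makes the two controllable separately.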

ORC Layout: Adaptive GUI Layout with OR-Constraints (Yue Jiang)

Nowadays designers have to create different layouts for different screen sizes and orientations to obtain desirable results, which makes GUI design time-consuming; multiple specifications are also hard to maintain and keep consistent. To solve this problem, we propose a novel approach for constraint-based graphical user interface (GUI) layout based on OR-constraints (ORC) in standard soft/hard linear constraint systems. Our aim is a technology that can adapt layouts to screens with different sizes, orientations, and aspect ratios from a single layout specification.
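To illustrate what an OR-constraint buys you, here is a hypothetical brute-force sketch (widget data and the stacking model are invented for illustration): each widget offers alternative geometries and exactly one must be chosen (the OR-constraint), fitting the screen acts as a hard constraint, and filling the screen acts as a soft objective. A real ORC solver encodes all of this in one soft/hard linear constraint system instead of enumerating combinations.

```python
from itertools import product

# Each widget offers alternative geometries; choosing exactly one alternative
# per widget is an OR-constraint.  Widget data is hypothetical.
widgets = {
    "toolbar": [(320, 40), (40, 320)],    # horizontal OR vertical
    "panel":   [(200, 300), (320, 120)],  # tall sidebar OR flat bottom strip
}

def solve(screen_w, screen_h):
    """Brute-force the OR choices: hard constraints (the layout must fit the
    screen) filter candidates; a soft objective (fill the screen) ranks them."""
    names = list(widgets)
    best, best_cost = None, float("inf")
    for choice in product(*(widgets[n] for n in names)):
        w = max(cw for cw, ch in choice)   # widgets stacked vertically
        h = sum(ch for cw, ch in choice)
        if w <= screen_w and h <= screen_h:          # hard constraints
            cost = (screen_w - w) + (screen_h - h)   # soft: minimize slack
            if cost < best_cost:
                best, best_cost = dict(zip(names, choice)), cost
    return best

print(solve(360, 640))  # portrait phone
print(solve(640, 360))  # same spec, landscape
```

The same single specification yields a vertical toolbar in portrait and a horizontal one in landscape, which is exactly the adaptivity the one-specification goal describes.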

Drones in Computer Graphics (Ke Xie)

Nowadays UAVs are becoming popular, and applications based on low-cost drones, such as aerial video, large-scale scene reconstruction, and drone photography, are increasingly common. However, in these applications it is very hard for a user to control the drone's position and orientation simultaneously. In our study, we have been trying to automate this through predefined trajectories and real-time flight command generation. In this talk, the newest research of VCC@SZU in this field will be introduced to the students.

Introduction on Reconstruction of Static and Dynamic Objects (Qian Zheng)

With rapid developments in 3D scanning and photogrammetry, it is now possible to efficiently capture static and dynamic objects in the real world. This attracts intensive research interest in acquiring, analyzing, reconstructing and understanding such objects. Nonetheless, obtaining a faithful 3D representation of real-world static and dynamic objects still remains an open problem due to various technical challenges. In this talk, I would like to introduce these difficulties, together with my work in the past few years on reconstructing buildings, articulated objects, and blooming flowers, and on inferring human interaction motions and beyond.

Deep Hierarchical Models for 3D Shape Understanding and Generation (Kai Xu)

In this talk, I will introduce our recent works on learning deep hierarchical models for the context-aware understanding and structure-aware generation of 3D shapes. Complex 3D shapes are typically represented as a hierarchy of objects, as evidenced by the widely adopted hierarchical scene graphs of CAD models. Deep convolutional neural networks achieve hierarchical representation learning with the help of pooling operations. Such feature hierarchies, however, do not explicitly capture and meaningfully model the spatial relations between different neuron activations, often resulting in unreasonable understanding and implausible generation. Our key insight is that a 3D structure can be effectively characterized by a hierarchical organization of its constituent parts, which explicitly encodes part relations such as proximity, symmetry, etc. By extending our GRASS model, a recursive neural network architecture for the hierarchical modeling of 3D shape structures, we develop a series of deep hierarchical models for both shape understanding and shape generation. We show that deep hierarchical models, with explicit relation modeling, attain much more powerful context-aware analysis and structure-aware synthesis than CNNs.

Deep Reinforcement Learning for Vision-Based Robotic Manipulation (Tan Zhang)

Robots can perform complex tasks in highly dynamic and unstructured environments. However, the vision system can be complex and error-prone, and designing the perception and control software for autonomous operation remains a major challenge. Reinforcement learning algorithms hold the promise of enabling robots to automatically learn new behaviors through experience. In this talk, I will discuss how robots can use reinforcement learning algorithms to learn manipulation skills from visual observation of the manipulator, without any prior knowledge of its configuration or joint state. I will cover several state-of-the-art reinforcement learning algorithms and present several recent applications to different vision-guided robotic manipulation tasks.

Autonomous Outdoor Scanning via Online Topological and Geometric Path Optimization (Pengdi Huang)

Autonomous 3D acquisition of outdoor environments poses special challenges. Different from indoor scenes, where the room space is delineated by clear boundaries and separations (e.g., walls and furniture), an outdoor environment is spacious and unbounded (think of a campus). Therefore, unlike for indoor scenes, where the scanning effort is mainly devoted to the discovery of boundary surfaces, scanning an open and unbounded area requires actively delimiting the extent of the scanning region and dynamically planning a traverse path within that region. We approach the planning of an energy-efficient auto-scanning course for outdoor scenes by formulating a discrete-continuous optimization of robot scanning paths. The discrete optimization computes a topological map, by solving an online Orienteering Problem (OP), which determines the scanning goals and paths on-the-fly. The dynamic goals are determined as a collection of visit sites with high reward of visibility-to-unknown. A visit graph is constructed by connecting the visit sites with edges weighted by traversing cost. This topological map evolves as the robot scans, deleting outdated sites that have either been visited or become rewardless, and inserting newly discovered ones. The continuous part optimizes the traverse paths geometrically between two neighboring visit sites by maximizing the information gain of scanning along the paths. The discrete and continuous processes alternate until the traverse cost of the current graph exceeds the remaining energy capacity of the robot. Our experiments demonstrate the effectiveness and advantages of the proposed method.
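The online site-selection step can be caricatured with a greedy reward-per-cost heuristic. Everything here (the site data, the ratio heuristic, the flat Euclidean cost) is a simplified stand-in of mine for the paper's Orienteering-Problem formulation; it only shows the shape of the loop: pick the most rewarding reachable site, spend energy to visit it, and drop sites that are no longer worth keeping in the graph.

```python
import math

def plan_tour(start, sites, budget):
    """Greedy caricature of the online Orienteering Problem: repeatedly move
    to the visit site with the best reward-per-cost ratio; sites that cannot
    be reached on the remaining energy are deleted, mirroring how outdated
    sites leave the visit graph.  `sites` maps name -> (x, y, reward)."""
    pos, tour = start, []
    remaining = dict(sites)
    while remaining:
        best = max(remaining,
                   key=lambda n: remaining[n][2]
                   / (math.dist(pos, remaining[n][:2]) + 1e-9))
        x, y, _reward = remaining.pop(best)
        cost = math.dist(pos, (x, y))
        if cost > budget:
            continue            # unreachable: drop the site and retry
        budget -= cost
        pos = (x, y)
        tour.append(best)
    return tour

sites = {"A": (1, 0, 5.0), "B": (0, 3, 9.0), "C": (6, 6, 2.0)}
print(plan_tour((0, 0), sites, budget=8.0))  # ['A', 'B']
```

The distant, low-reward site C is abandoned once the energy budget runs low, which is the behavior the alternation in the paper terminates on.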


Address: Room 821, Building of CSSE, SZU South Campus, Shenzhen, Guangdong, China
Zipcode: 518060



Copyright © Visual Computing Research Center, Shenzhen, P.R.China