Max-Planck-Institut für Informatik
IMPRS-CS: International Max-Planck Research School for Computer Science
Campus E1 4, Room 115A
Get my email address via email
CV: Curriculum Vitae
My research interests include, but are not limited to:
- Computer Graphics
- Physical Simulations
- Image Processing
- Pattern Detection
- Computer Vision
- Machine Learning
- Artificial Intelligence
Unpublished papers available upon request.
- Cheema, N., Fridman, L., Rosenholtz, R. and Zetzsche, C. (2016): Optimum statistical representation obtained from an intermediate feature level of the visual hierarchy. KogWis 2016.
- Zetzsche, C., Rosenholtz, R., Cheema, N., Gadzicki, K., Fridman, L. and Schill, K. (2017): Neural Computation of Statistical Image Properties in Peripheral Vision. HVEI-SPIE 2017.
- Fridman, L., Cheema, N., Wolfe, B. A., Zetzsche, C., Reimer, B. and Rosenholtz, R. (submitted for publication): A Fast Foveated Fully Convolutional Network Model for Human Peripheral Vision.
- Cheema, N., Fridman, L., Rosenholtz, R. and Zetzsche, C. (unpublished): Neurobiologically realistic model of statistical pooling in peripheral vision.
- Neurocomputation in the Visual Periphery
The goal of this project is to gain a deeper understanding of what happens in our peripheral vision, where the representation appears coarser. This is assumed to be a strategy for
dealing with an information bottleneck in visual processing. The phenomenon of crowding shows that the reduction of information available in the periphery is not merely the result
of reduced resolution. Several studies have shown that the available information can be represented with local summary statistics. Such a "texture-like" representation predicts
the often associated "jumble" of features. One of the goals of our research is to find a better model of the encoding in the visual cortex through a more biologically inspired
neurocomputational approach.
My task in this project is the generation of images that represent such a foveated view, with the crowding effect in the periphery. For this, I reconstruct the images from the local
summary statistics of the original image by tiling the image into local pooling regions and extracting the statistics within each region. The reconstruction is then done via
gradient descent with respect to an initial white-noise image, using the standard forward-backward pass from Caffe. The newly constructed image then has the same local
summary statistics as the original image. These images are similar to the "Mongrels" by Rosenholtz et al. or the
"Metamers" by Freeman and Simoncelli, with the exception that we use the correlations of the feature maps of merely
one layer of a pretrained high-performing convolutional neural network as our only local summary statistic, instead of carefully hand-crafted features. The best-performing layer
is determined visually, by classification results, and with different distance measures. Furthermore, we replaced the multiplication operations in the computation of the correlations
with AND-like operations using a local response normalization (more under Publications). This way, our model is closer to the actual biological operations than prior models.
I mainly implemented this in Python, using Caffe to extract the feature maps. Currently, I am porting everything to C++ and CUDA in order to generate these images much
faster, and to produce enough of them to use for scene recognition, as we hope that these images may improve scene recognition.
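The reconstruction-by-gradient-descent idea above can be sketched in plain NumPy. This is a minimal toy version, not the project's actual code: three fixed random filters stand in for the feature maps of one pretrained CNN layer, a single pooling region is used, and the filter sizes, image size, and learning-rate schedule are all illustrative assumptions. The summary statistic is the Gram (correlation) matrix of the feature maps, and a noise image is iteratively updated to match the statistics of an original image.

```python
import numpy as np

rng = np.random.default_rng(0)

# Three fixed random 3x3 filters stand in for the feature maps of one CNN
# layer (in the actual project these come from a pretrained network via
# Caffe; the filters and sizes here are illustrative only).
filters = rng.standard_normal((3, 3, 3))

def feature_maps(img):
    """'Valid' cross-correlation of img with each filter."""
    H, W = img.shape
    F = np.empty((len(filters), H - 2, W - 2))
    for k, w in enumerate(filters):
        for i in range(H - 2):
            for j in range(W - 2):
                F[k, i, j] = np.sum(img[i:i + 3, j:j + 3] * w)
    return F

def gram(F):
    """Correlations between feature maps, pooled over one region."""
    flat = F.reshape(F.shape[0], -1)
    return flat @ flat.T

def loss_and_grad(img, target):
    """Squared Gram-matrix distance and its analytic gradient w.r.t. img."""
    F = feature_maps(img)
    diff = gram(F) - target
    loss = np.sum(diff ** 2)
    flat = F.reshape(F.shape[0], -1)
    dF = (2 * (diff + diff.T) @ flat).reshape(F.shape)   # dL/dF
    grad = np.zeros_like(img)
    # Backward pass through the correlation: scatter dF through each filter.
    for k, w in enumerate(filters):
        for i in range(dF.shape[1]):
            for j in range(dF.shape[2]):
                grad[i:i + 3, j:j + 3] += dF[k, i, j] * w
    return loss, grad

# Reconstruct an image from the statistics of an "original" image,
# starting from white noise (one pooling region for brevity).
original = rng.random((8, 8))
target = gram(feature_maps(original))
x = rng.random((8, 8))

lr = 1e-6
loss, g = loss_and_grad(x, target)
initial_loss = loss
for _ in range(300):
    candidate = x - lr * g
    new_loss, new_g = loss_and_grad(candidate, target)
    if new_loss < loss:                 # crude backtracking step control
        x, loss, g = candidate, new_loss, new_g
        lr *= 1.2
    else:
        lr *= 0.5
```

The real pipeline differs mainly in scale: many pooling regions are tiled over the image, the features come from a deep network's forward pass, and the gradient comes from Caffe's backward pass instead of the hand-written backward loop above.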
Used Languages: Python, C++ and CUDA
Used Frameworks: BVLC Caffe, SciPy and NumPy
Reference: Dr. Christoph Zetzsche
- 2D Game-Engine (private)
When I have enough time, I try to work on my 2D Game-Engine. The latest and most stable version can be found in
the "Platformer" directory. The whole project started out as a school project with my teammate Nina Döge, whose task was to implement an easy-to-use animation framework
and game logic for the engine. The engine is implemented in C# and GLSL.
My task was to implement the shaders and the physics engine, which supports circle-to-polygon collision (for both convex and concave polygons) and polygon-to-polygon collision
(also convex and concave), as well as joints and joint collision. The convex hull of an object is computed via a combination of the Quickhull algorithm and a Sobel filter.
The filter is implemented in GLSL, as are the shaders. Collision detection is done via the separating axis test and Voronoi regions. I accelerated these computations with
OpenMP-like parallel for-loops, GPU computations, look-up tables, bit shifts, and acceleration structures such as a quadtree and spawners. Furthermore, the engine supports variable
forces such as gravity, drag and restitution. Thus, different materials and environments can also be simulated.
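The separating axis test mentioned above is the core of the engine's collision detection. As a minimal sketch (in Python rather than the engine's C#, and restricted to convex polygons; the engine additionally handles concave shapes and Voronoi regions): two convex polygons are disjoint exactly when some edge normal of either polygon separates their projections.

```python
import numpy as np

def project(poly, axis):
    """Project a polygon's vertices onto an axis; return the interval."""
    dots = poly @ axis
    return dots.min(), dots.max()

def sat_collide(a, b):
    """Separating axis test for two convex polygons given as Nx2 arrays.

    Checks every edge normal of both polygons; if the projection intervals
    are disjoint on any of them, that axis separates the shapes.
    """
    for poly in (a, b):
        n = len(poly)
        for i in range(n):
            edge = poly[(i + 1) % n] - poly[i]
            axis = np.array([-edge[1], edge[0]])   # perpendicular to edge
            a_min, a_max = project(a, axis)
            b_min, b_max = project(b, axis)
            if a_max < b_min or b_max < a_min:
                return False                        # separating axis found
    return True

square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
overlapping = square + np.array([0.5, 0.5])
far_away = square + np.array([3.0, 0.0])
```

Concave polygons can be handled by decomposing them into convex pieces (or, as in the engine, by first extracting a convex hull) and testing each piece.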
Used Languages: C# and GLSL
Used Frameworks: SFML 2 for rendering and the .NET framework
Reference: Thomas Jahn
- 3D Physics-Engine (private)
This engine was a project for
Computer Graphics at the
University of Applied Sciences Bremen and to teach myself how 3D physics engines work at their core.
It was implemented in C++.
The engine supports collisions between spheres, convex polyhedra and cylinders. Collision detection is done via the separating axis test. Rotation was implemented using
quaternions. The engine also supports variable forces, such as gravity, drag and restitution. Thus, different materials and environments can be simulated.
Prototyping was done in Matlab.
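The restitution-based response the engine uses can be illustrated with the simplest case, sphere-sphere contact. This is a textbook impulse formulation sketched in Python (the engine itself is C++; function and parameter names here are illustrative): the relative velocity along the contact normal is reversed and scaled by the coefficient of restitution, weighted by the inverse masses.

```python
import numpy as np

def resolve_sphere_collision(p1, v1, m1, r1, p2, v2, m2, r2, restitution=1.0):
    """Impulse-based response for two spheres; returns the new velocities.

    restitution = 1.0 gives a perfectly elastic bounce, 0.0 a plastic one,
    which is how different materials can be simulated.
    """
    n = p2 - p1
    dist = np.linalg.norm(n)
    if dist >= r1 + r2 or dist == 0.0:
        return v1, v2                    # no contact (or degenerate overlap)
    n = n / dist                         # unit contact normal
    vn = (v2 - v1) @ n                   # relative normal velocity
    if vn > 0.0:
        return v1, v2                    # already separating
    j = -(1.0 + restitution) * vn / (1.0 / m1 + 1.0 / m2)   # impulse magnitude
    return v1 - (j / m1) * n, v2 + (j / m2) * n

# Head-on elastic collision of equal masses: the spheres swap velocities.
p1, v1 = np.zeros(3), np.array([1.0, 0.0, 0.0])
p2, v2 = np.array([1.5, 0.0, 0.0]), np.zeros(3)
v1_new, v2_new = resolve_sphere_collision(p1, v1, 1.0, 1.0, p2, v2, 1.0, 1.0)
```

For polyhedra and cylinders the same impulse formula applies once the separating axis test has produced a contact normal; only the normal computation changes.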
Used Languages: C++
Used Frameworks: OpenGL and GLFW 3 for rendering
Reference: Prof. Dr. Peter Krug
- Libraries and Frameworks:
- Integrated Development Environments:
- Operating Systems:
Mac OS X,
Ubuntu 16.04 LTS,
Ubuntu 14.04 LTS
- Design Software:
Adobe Premiere Pro,
Adobe After Effects,
German (fluent), English (fluent), Urdu (experienced), Hindi (oral, experienced), Punjabi (some experience), Spanish (fundamental knowledge), Latin (basics)
- Painting / Drawing
- 3D Modeling
- Cat Videos