The theme for June here on the blog is 3d fundamentals and I wouldn’t be doing you any favors if I didn’t spend some time sharing a little bit about the big picture of how 3d art is created.

So this is going to be a high level overview of the different steps you need to know about in order to produce amazing art with Blender. Some of them you may already know about, but some of them may be a little foreign to you. You don’t have to master all or even most of these in order to be a great 3d artist, but you should at least know what they are and be able to communicate with people who work in other parts of the pipeline in case you ever need to work with them on a project or in case you decide to expand into a new area in the future.

So just what is a 3d production pipeline? So glad you asked, let’s break things down.

The Big Picture

The 3d production pipeline: several of the steps in the pipeline touch and even bleed into one another. Pictured here are the main steps of the pipeline from left to right: modeling, shading, lighting, animation, and rendering. Scattered throughout are sub steps that bleed into the larger steps. These include sculpting, retopologizing, uv mapping, texture painting, scene layout, environment configuration, rigging, skinning, and scene optimization.

What I’ve tried to do here is give you an idea of how one step in the pipeline flows into another step starting from the far left and moving towards the right.

Now you may have noticed that the pieces don’t all line up in a straight line, and there’s a very good reason for this. Ask two Blender artists (or any artists) whether texturing or lighting should happen first and you may very well end up with what I like to call a ‘Blender brawl’. Depending on what kind of project you’re working on, it may make more sense to light or texture your scene out of order just to make things more efficient and easier to produce. So certain steps of the pipeline are staggered to account for the need to maintain flexibility during production.

The Difference Between an Independent Artist and a Larger Company

It’s all about what makes the most sense for your situation. Large scale projects frequently have parts of the pipeline that overlap one another, and some major visual effects studios have people from several departments working simultaneously on different parts of a project in order to meet a deadline. Meanwhile, a small team of just a few artists may find it necessary to tackle each step in the pipeline in a very linear manner so that they can stay organized and avoid constantly reworking assets for a project.

Indie Artist vs. A Company
Typically an independent artist is a generalist because they end up wearing many different hats and managing the pipeline largely on their own. However, a company can afford to have many specialists that are responsible for one or a few jobs each and help each other complete the pipeline by complementing each other’s strengths.

As a 3d artist you can choose to familiarize yourself as much or as little as you want with each step, but choose carefully. People who try to learn a little about everything are referred to professionally as ‘3d generalists’. And people who focus on a particular area or two are usually referred to professionally according to that particular step in the pipeline such as a ‘3d modeler’ or a ‘texture artist’, etc. Choosing whether or not you should specialize in an area is a much larger topic for another article; however, if you’re just starting out then I recommend learning a little bit about each step until you figure out what interests you the most. Because even the 3d specialists have to learn how to communicate with all the other artists in the pipeline to be successful in their careers.

So now that you know why you should be aware of the 3d production pipeline let’s take a look at each step and what happens during the process.

Understanding the Production Phases

So if you’re reading this I’m going to assume you may just be getting started with your journey into 3d graphics. I’ll explain each of the main steps that make up the 3d production pipeline so that you can get familiar with the process of how a 3d artist creates graphics for visual effects, animation, or illustrations.


Modeling

Modeling is one of my favorite steps in the pipeline, probably because it was the very first thing I learned how to do when I got started with Blender.

The final concept car created for my course BlendMasters Volume 1: Model a Concept Car for Film in Blender.

Modeling is the process of creating the mesh objects that are used to represent the physical elements in your 3d environment. This is done in many ways including traditional hard surface modeling, sculpting, and procedural mesh generation.

So almost everything that you see in a 3d render is made up of a 3d mesh. These mesh objects can be created using a virtual plethora of techniques including traditional hard surface techniques, procedural modifiers to generate and shape your meshes, as well as 3d sculpting. The techniques used to model an object vary based on the type of object being modeled, the final purpose for the object in a project, and the ability or preference of the modeling artist(s) involved.

The definition of a vertex, edge, and face within the realm of 3d graphics.
The definition of a vertex, edge, and face within the realm of 3d graphics.

A mesh is made up of an interlocking series of vertices, edges, and faces. These three elements fit together to make what are called polygons. Polygons are connected along the edges and points like a puzzle to create a 3d surface which can be shaped and manipulated to create virtually any kind of object you can imagine. Polygons can be made up of three sides (triangles), four sides (quads), or five or more sides (known in Blender as n-gons). Depending on how you model an object it may be important to use only triangles or quads.
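
To make this concrete, here is a minimal sketch in plain Python (no Blender required; the names are my own, for illustration only) of how a mesh can store vertices and faces, with edges derived from the face loops:

```python
# A tiny mesh: corner positions (vertices) and one face listing the
# vertex indices it connects. Edges are implied by the face loop.
quad_mesh = {
    "vertices": [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)],
    "faces": [[0, 1, 2, 3]],  # one four-sided face
}

def classify_face(face):
    """Label a face by its side count, using Blender's terminology."""
    sides = len(face)
    if sides == 3:
        return "triangle"
    if sides == 4:
        return "quad"
    return "n-gon"  # five or more sides

def edges_of(face):
    """Derive a face's edges: each vertex connects to the next one
    around the loop, and the last connects back to the first."""
    return [(face[i], face[(i + 1) % len(face)]) for i in range(len(face))]
```

Calling `classify_face([0, 1, 2, 3])` gives `"quad"`, and `edges_of` on that same face yields its four edges in order.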

A quick concept sculpture I did in Blender a few years back.

When sculpting a 3d mesh, less emphasis is placed on the type or number of polygons used to build a model, because you usually have to convert your final sculpted mesh into an alternate form that uses fewer polygons. This is especially true when designing models for applications such as video games, since games require a very high frame rate to function as realtime applications. Because most games have hundreds or thousands of objects on the screen at a time, you need to keep the polygon count of your models relatively low to save room in the scene for other objects.
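
That budgeting idea can be sketched in a few lines of plain Python (illustrative names of my own; the only assumption is the standard rule that an n-sided face triangulates into n − 2 triangles):

```python
def triangle_count(faces):
    """Estimate a mesh's triangle count: an n-sided face splits
    into n - 2 triangles when the engine triangulates it."""
    return sum(len(face) - 2 for face in faces)

def within_budget(meshes, budget):
    """Check a list of meshes (each a list of faces) against a
    scene-wide triangle budget, as a game project might."""
    return sum(triangle_count(faces) for faces in meshes) <= budget
```

So a quad counts as two triangles, a triangle as one, and a scene passes only if the total stays under the budget you set.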

Original image from Wikipedia.

There are alternative forms of 3d modeling that are less mainstream than the ones mentioned above. People in the manufacturing industries typically use 3d modeling techniques that employ non-polygonal geometry because it tends to be more accurate. One of these alternate geometry types is called NURBS (Non-Uniform Rational B-Splines). These are 3d surfaces resembling those made by polygons, but instead they are generated from boundary curves. The benefit of using these types of surfaces is that they are infinitely smooth and maintain the accuracy of their dimensions to a very high degree. You can imagine the importance of geometry like this when manufacturing automobiles or electronics. But the entertainment industry has mostly done away with NURBS modeling, due both to the massive leaps forward in computing power that now make large amounts of polygonal geometry practical, and to the fact that, traditionally speaking, it was much more difficult to generate NURBS based geometry than it was to model using polygonal techniques. So over time NURBS, while hiding quietly in the background, have all but been abandoned by most of the digital arts world.
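
Full NURBS evaluation involves control point weights and knot vectors, but the core idea of a smooth curve generated from control points rather than stored polygons can be sketched with its simpler relative, the uniform cubic B-spline. This is illustrative math only, not how any particular CAD package implements it:

```python
def cubic_bspline_point(p0, p1, p2, p3, t):
    """Evaluate one segment of a uniform cubic B-spline at t in [0, 1].
    The curve is smoothly pulled toward, but does not usually touch,
    the four control points influencing this segment."""
    b0 = (1 - t) ** 3 / 6.0
    b1 = (3 * t**3 - 6 * t**2 + 4) / 6.0
    b2 = (-3 * t**3 + 3 * t**2 + 3 * t + 1) / 6.0
    b3 = t**3 / 6.0
    return b0 * p0 + b1 * p1 + b2 * p2 + b3 * p3
```

With evenly spaced control values like 0, 1, 2, 3 the segment reproduces a straight line exactly, which is a handy sanity check: no matter where you sample it, the result lies between 1 and 2.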

Check out more about this open source headset online.

One other form of alternative modeling I want to mention, because it has such large implications (especially at the time this article is being released), is the ability to model geometry using VR, or virtual reality, techniques. Many mainstream companies continue to develop new technology to take advantage of the VR entertainment craze driven by the games industry, creating new tools that allow artists to create content inside a virtual world. This is big news for realtime artists, as they will actually be able to see their 3d assets rendered in realtime as they create them. Time will tell what amazing breakthroughs the industry will bring forth as necessity paves the way for ingenuity.


Shading

Your models wouldn’t be anything but gray blobs of clay without the shading process. It can add even more depth and character to your models than modeling itself by affecting the surface properties of your models.

Even something as simple as an apple can have a complex shading network.

Shading is the process of creating materials and applying them to the surfaces of your models. In Blender this is done by creating custom node networks that describe a material and by adding stylistic or photographic textures to the material where needed. Other processes involved in shading include UV unwrapping, texture painting, and building fully procedural materials that are generated mathematically, without images.

Shading is vital because it gives each surface of your object varying degrees of physical definition, including properties such as color, reflectivity, and displacement on top of the surface like scratches or bumps. Combined with a well built model, shading can add the microscopic levels of detail that really bring your object to life.

Blender’s primary rendering engine is called ‘Cycles’ (more on Cycles in the rendering step of the pipeline). Cycles is a physically based rendering engine, but that doesn’t mean that you can’t create a stylized look while using it. A material is defined by using a series of interconnected nodes that, when put together in the proper configuration, describe how a particular material looks when rendered.

When shading a scene you’ll encounter a few terms that you’ll need to be aware of so that you can understand how to build a shader effectively.

Materials, Nodes, and Textures

A shader or material is the object in Blender that describes the total look of the surface on a particular object. Inside a material you will find several elements that describe specific aspects of the surface such as the color, reflectivity, and irregular bumps. These are just a few examples of the elements that are combined inside a material to describe the overall look of a 3d surface. These elements are combined in Cycles by using the nodes mentioned earlier as well as photographs or hand painted images that are loaded into Blender and applied to the surface. These images are known as textures and are vital to creating feature-rich materials for your 3d models.
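
As an illustration of how nodes combine inside a material, here is a plain Python sketch of the blending a mix-style node performs. The function name and structure are mine, not Blender's internals:

```python
def mix_node(color_a, color_b, factor):
    """A sketch of what a Mix node does: blend two RGB colors by a
    factor, where 0.0 gives color_a and 1.0 gives color_b."""
    return tuple((1 - factor) * a + factor * b
                 for a, b in zip(color_a, color_b))

# Chaining nodes means feeding one node's output into another's input.
red, blue = (1.0, 0.0, 0.0), (0.0, 0.0, 1.0)
purple = mix_node(red, blue, 0.5)  # halfway between the two colors
```

A real material is just a much larger graph of operations like this, ending at a shader output.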


Lighting

The lighting step is really where all your hard work starts to pay off because it sets the mood for your entire scene. Even high quality models with ultra realistic shading can fall flat without the proper lighting.

The concept car updated for my course BlendMasters Volume 2: Shading a Car in the Cycles Engine.

Lighting is the process of adding illumination to a 3d scene through the addition of lamps, emissive materials, or image based lighting with high dynamic range images (HDRs). These probably aren’t the only possible forms of lighting in Blender, but they are the most common.

Blender has many different types of lamps to help you light your scenes. And each lamp type was created to mimic real world lighting, with unique methods for lighting direction, falloff, and shadow characteristics.

The Sun Lamp

Sun lamps mimic the directional lighting provided by the sun and only output light rays that are essentially parallel. These are best used for very large scale light sources that are outside the scene, such as the sun or the moon.
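
Because all of a sun lamp's rays share one direction, the diffuse brightness of a surface depends only on the angle between that direction and the surface normal (Lambert's cosine law). A plain Python sketch, with names of my own:

```python
def lambert_brightness(normal, light_dir):
    """Diffuse brightness under a sun lamp: the cosine of the angle
    between the surface normal and the single, parallel light
    direction, clamped so back-facing surfaces receive no light.
    Both vectors are assumed to be unit length."""
    dot = sum(n * l for n, l in zip(normal, light_dir))
    return max(0.0, dot)
```

A surface facing the light head-on gets full brightness, a surface facing away gets none, and everything in between falls off smoothly with the angle.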


The Point Lamp

Point lamps are omnidirectional like household light bulbs, casting light evenly in every direction from a single point in space. Spot lamps, by contrast, are uniquely directional like the headlights on a car or the can lights used in theatre, and they give you much more control over how shadows are formed because of how their physical counterparts in the real world are constructed.
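
Since a point lamp radiates in all directions, its light spreads over a growing sphere, so the intensity a surface receives falls off with the square of the distance. A quick illustrative sketch (plain Python, deliberately simplified physics):

```python
import math

def point_lamp_intensity(power_watts, distance):
    """Light from a point lamp spreads over a sphere of area
    4 * pi * r^2, so the received intensity falls off with the
    square of the distance from the lamp."""
    return power_watts / (4 * math.pi * distance ** 2)
```

Doubling the distance to the lamp quarters the intensity, which is why point-lit scenes darken so quickly away from the source.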


The Spot Lamp

Spot lamps are great for achieving a very theatrical type of lighting and are known for their strong beam in the center with a halo of shadow falling off toward the edges. With spot lamps you have additional control over the falloff of that outside shadow, and you can make the transition very harsh or very soft depending on what kind of mood you want your lighting to have. Typically these lamps are used to create a lighting setup that feels theatrical; however, there are other situations where a spot lamp may be appropriate, such as the headlights on a car. Any time a real world light uses a lens to help focus the beam, a spot lamp can be used to mimic this behavior.


The Area Lamp

Area lamps are capable of spreading a lot of soft, even lighting around a scene, such as the lighting found in studio photography. They are often used to create ultra soft shadows when showing off a product or a model.


The Hemi Lamp

Hemi lamps, or hemisphere lamps, are unique to Blender and simulate the type of lighting provided outdoors by the environment. They are somewhat deprecated when using the Cycles rendering engine: in the past they were needed as a shortcut to simulate the environmental bounce light in a very large outdoor scene, but Cycles, being a physically based render engine, provides that bounce light on its own during rendering, so Hemi lamps are no longer needed.

An emissive material inside Blender.

Emissive materials are materials that generate their own light from the surface of an object. To get a mental picture of this you can think of a firefly’s tail or glow in the dark paint. To create an emissive material you simply assign it an emissive shader while using Cycles, and it will act as a light source in the current scene.


HDRIs, or High Dynamic Range Images, come in many flavors and can be used to accurately recreate the lighting from a real world location in a 3d scene. The basic idea is to capture an HDRI at a location using a camera that can take multiple exposures of an environment. Then, with some additional software, you can combine these different exposures into a single image that contains all of the lighting information from the brightest brights to the darkest darks. 3d software like Blender can interpret this data and actually use the brightest parts of the image as simulated light sources in a scene. The great thing about an HDR is that you kill two birds with one stone when you use one: not only does it give you extremely accurate lighting for your scene, it also provides an automatic source of environmental reflections for all of your objects. Using an HDR you can film a video clip at a location and then place a 3d object back into the shot. This is how the big Hollywood movies add those impossible 3d visual effects to our favorite films.
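
The exposure merging idea can be sketched for a single pixel like this. It is a deliberately simplified model (real HDR tools also weight out clipped pixels and account for the camera's response curve), and the names are my own:

```python
def merge_exposures(samples):
    """Merge one pixel's values from several exposures into a single
    radiance estimate. Each sample is (pixel_value, exposure_seconds);
    dividing by the exposure time recovers a radiance estimate, and
    averaging the estimates combines the shots into one HDR value."""
    estimates = [value / seconds for value, seconds in samples]
    return sum(estimates) / len(estimates)
```

A pixel that reads 0.5 in a one-second shot and 1.0 in a two-second shot agrees on a radiance of 0.5, which is exactly what the merge recovers.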

So as you can see lighting is one of the most important things you can control when you are putting together your 3d scene. It effectively sets the stage, and the mood, for your environment and can add an incredible amount of dynamic feeling to any scene.


Animation

This step of the pipeline is why so many people get into 3d graphics in the first place. The art form of animation has become so prevalent with advances in computer graphics over the past few decades that it is quite literally a household topic of conversation. And when you tell most people that you’re a 3d artist, you might find that they automatically assume you mean you’re an animator, even if that’s not the case. After all, the entire process outlined in this pipeline can simply be summed up in the phrase ‘3d animation’.

A frame from The Blender Institute’s open source film Sintel.

Animation is the process of adding movement to the 3d objects in your scene that are actors in the environment. Most commonly objects that are animated are characters; however, they can also be part of the scenery such as trees swaying in the breeze, the 3d camera, or anything else that requires movement on the screen. Animation is accomplished through the use of simulations, procedural modifiers, as well as objects representing a skeletal system for an object known as an armature.

Keyframing in Blender.

The most common form of animation is accomplished by setting keyframes at specific intervals along the timeline of a given scene. Any attribute of an object can be keyframed, including position, rotation, scale, and countless others. The keyframes act as milestones in time, marking the significant values of a given attribute and, more importantly, how those values change over time. As these attributes are animated in a scene they add to the overall rhythm, pacing, and mood, thereby creating the illusion of life.
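
Here is a plain Python sketch of the in-betweening idea: given a list of keyframes, compute the value at any frame by interpolating between the surrounding pair. Blender's default interpolation is actually eased (Bezier), so linear is a simplification, and the function name is my own:

```python
def value_at_frame(keyframes, frame):
    """Linearly interpolate an animated value between keyframes.
    `keyframes` is a sorted list of (frame, value) pairs."""
    for (f0, v0), (f1, v1) in zip(keyframes, keyframes[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)   # how far between the two keys
            return v0 + t * (v1 - v0)
    # Before the first or after the last keyframe: hold the end value.
    return keyframes[0][1] if frame < keyframes[0][0] else keyframes[-1][1]
```

With keys at frame 1 (value 0.0) and frame 25 (value 12.0), the value at frame 13 is halfway between: 6.0. Every animated attribute in a scene is evaluated the same way, frame after frame.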

The topic of animation is extremely complex and, like most parts of this pipeline, one’s entire career could be spent devoted to unravelling the secrets of creating believable animation. It’s important to understand that 3d animation was created standing on the shoulders of the art of traditional, hand drawn 2d animation. And although a third dimension was added to create a new art form, the foundational principles of 2d animation carry over into 3d animation, such as the twelve principles created by the original Walt Disney animators. So if you want to start down the path of learning how to animate in 3d, my advice would be to start by learning the history of your craft. Discover the importance of how and why to make things move on the screen. Discover the motivation behind an animated object, and you can begin to apply all this knowledge to 3d animation inside Blender. Go out and get yourself a copy of Disney Animation: The Illusion of Life by Frank Thomas and Ollie Johnston. This book alone has taught me enough about animation to keep me practicing for a lifetime.

Disney Animation: The Illusion of Life by Frank Thomas and Ollie Johnston


Rendering

This is the step in the pipeline where all your efforts pay off. It’s where you get to actually see your shaded and lit models placed in your environment. It’s where you get that stylized look or an ultra realistic image of your scene back from Blender.

Blender render in progress...

Rendering is the process by which Blender takes the data from a scene and attempts to accurately simulate the lighting and shading from the point of view of the camera, producing a 2d picture on the screen. For an animation, multiple images are rendered, ordered sequentially, and then played back to create movement on the screen.
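
The arithmetic connecting frame rate, shot length, and rendered image count is simple but worth internalizing; a tiny sketch (illustrative, not a Blender API):

```python
def frames_needed(seconds, fps=24):
    """How many still images must be rendered for a shot of the given
    length at the given frame rate. 24 fps is the film standard;
    games and video often run at higher rates."""
    return round(seconds * fps)
```

A five-second shot at 24 fps means 120 separate renders, which is why per-frame render time matters so much for animation.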

The art of rendering is closely tied to every other step of the process because it is the part that actually interprets what all of your objects look like in the scene. So it’s important to understand that the *way* you model your objects will affect your final render. The *way* you shade and light your scene will affect the final render. And even the *way* you animate actors in your scene can affect the final render. Along with these considerations, you will also have several render-specific settings you can change which will alter the final look of your scene: things like the dimensions of your rendered image, the amount of motion blur applied to animated objects, the color correction applied to the render, and small tweaks to optimize the speed at which a scene is rendered. They all play a large role in how efficient the rendering process is as well as the final overall quality of the rendered image.

In the past Blender used a rendering engine called ‘The Blender Internal Engine’ (BI). The BI engine was a stylistic, scanline rendering engine that allowed for some great effects, but wasn’t based on physically accurate shading and lighting principles. As a result you were forced to engineer methods to fake realistic looking renders and it was very difficult to achieve accurate, real-world lighting.

As you can see the BI engine is capable of incredible feats when pushed to its limits. All images borrowed from The Blender Archives.
“Old Guy” by Kamil (maqs) Makowski (Far Left), Gorilla by Everett Gunther (Top Center), 3D Watch by Olivier Amrein (Bottom Center), Cap by Andrea Fiori (waaf) (Top Right), Grapes by Lucas Veber (Middle Right), Sign of the Juggernaught by Derek Watts (Bottom Right).

In recent years a new engine was built from the ground up to allow for more physically accurate shading and lighting called ‘Cycles’ or ‘The Cycles Rendering Engine’. And with the complete backing of the Blender community the BI engine has been quietly retired into the background of the Blender user experience. But if you would still like to experiment with the old BI engine then all you have to do is go up to your render menu at the top of the main interface and switch from the Cycles engine to the BI engine. And although the BI engine is no longer officially updated or supported you may still find that you enjoy using it depending on what kind of project you’re working on. It’s still used by many Blender artists when working on stylistic rendering.

The Cycles Engine was redesigned from the ground up to deliver physically accurate rendering capabilities. All images borrowed from The Blender Cycles Gallery.
Seoul by Gleb Alexandrov (Far Left), Fiat APOC by Nathanial Ryan (Top Center), A long way to go by Reynante Martinez (Center), Cosmos Laundromat by Blender Institute (Bottom Center), Abandoned teddy bear by Juan Carlos Montes (Top Right), Untitled by Gleb Alexandrov (Middle Right), The magical journey by Harald Tovar (Bottom Right).

Rendering is an often overlooked part of the pipeline because it tends to be something a lot of artists only think about at the last minute, when they want to see the results of their work quickly. Unfortunately, you run the risk of getting very unsatisfying results if you don’t pay attention to how you work your way through the pipeline up to the rendering step. You may find yourself having to go back and redo certain models, materials, or any of a dozen other things simply because you weren’t thinking far enough ahead about how they were going to impact, or be impacted by, the final render. So a simple bit of advice as you start out is to ask yourself, “How is what I’m doing going to work in the final render?” A good example is narrowing down what you animate in your scene to only those areas that will be seen by the camera. There’s no point in animating a bunch of objects if they are always outside the frame. So just think about things like that while you’re working. It will save you a lot of time early on and allow you to move on to other parts of the pipeline more quickly.

Parting Advice

Now that you have an idea of how the 3d production pipeline works, you can start to delve deeper into each step and learn more as needed. The entire process is fascinating and, to a certain extent, constantly morphing and changing as the technology makes strides forward. So my advice is to start with just one topic and learn as much as you possibly can about how things work. Get into the nitty gritty and really get excited about it. And then once you’ve mastered what you feel you need for your workflow, simply move on to another step and continue learning. After a while things will start to come more easily, especially when you start to see how everything really is connected. And you’ll also realize that you will never really know all there is to know about any part of the process anyway. So don’t worry about it. Just keep learning, have fun, and go get unleashed.