Brief history of Dehancer

Lately we’ve been asked a lot about the story behind Dehancer. So we thought that our CEO, Pavel Kosenko, would be the best storyteller.

Background

The project, which today bears the name Dehancer, was born in the fall of 2014. By that time, I had 30 years of film experience and over 10 years of experience in digital photography. I studied modern technologies and generously shared my experience: I published articles and video tutorials on photography and gave lectures.

In 2013, my “Lifelike” book was published – the biggest fruit of my three-year study of digital technologies and related issues. In it, I managed to bridge the gap between the technical and aesthetic aspects of color photography. Apart from the theory, I also included practical solutions to digital photography issues.

The origins of the idea

At that time I traveled a lot and shot mainly with digital cameras. I would bring thousands of photographs from my trips and would spend a lot of time selecting and processing them. However, over 10 years I learned to do it quickly, and at some point I realized that processing most images comes down to one specific, almost invariable algorithm – setting the black and white points, adjusting the white balance, increasing contrast and lowering saturation.
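
To make these steps concrete, here is a minimal sketch of what such a pipeline can look like in code. This is only my illustration of the general idea – the function name, the parameter values and the simple linear operations are assumptions, not the actual implementation of The Worseizer or Dehancer.

```python
# Minimal sketch of the "typical processing" steps, assuming an RGB image
# as a NumPy array normalised to the [0, 1] range. Parameter values are
# arbitrary placeholders, not real defaults from any Dehancer product.
import numpy as np

def typical_processing(img,
                       black=0.02, white=0.98,      # black / white points
                       wb_gains=(1.02, 1.0, 0.97),  # per-channel white balance
                       contrast=1.15,               # boost around mid-grey
                       saturation=0.85):            # slight desaturation
    out = img.astype(np.float64)

    # White balance: scale each channel by its gain.
    out *= np.asarray(wb_gains)

    # Black and white points: stretch the tonal range between them.
    out = (out - black) / (white - black)

    # Contrast: simple linear expansion around middle grey.
    out = (out - 0.5) * contrast + 0.5

    # Saturation: blend each pixel towards its luminance.
    luma = out @ np.array([0.2126, 0.7152, 0.0722])
    out = luma[..., None] + (out - luma[..., None]) * saturation

    return np.clip(out, 0.0, 1.0)
```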

You might ask: “How come?” After all, all photos are different, and you need to process them in different ways. And although there’s some truth to this, as you gain experience, you come up with a clear algorithm of actions that quickly delivers the desired results. You soon realize that it’s better to spend more time shooting than processing.

A bad photo is often easier to reshoot than to fix in an editor. On the other hand, processing an initially good shot usually comes down to a fixed set of technical solutions – digital “development” and a bit of polishing. Ultimately, this frees up more time for photography.

Besides, I noticed that the experienced photographers I know (many of them with a worldwide reputation) often processed images in a similar manner. Sometimes they slightly changed or supplemented the sequence of operations, while maintaining the general principles.

That was the point when I decided to automate this typical processing.

First there was Action

In October 2014, I created a small Photoshop action that reproduced the typical processing operations. As a joke, I called it “The Worseizer 1.0” (in Russian, “ukhudshaizer” – a tool that makes something worse than it was) and published it in the public domain for free.

Technically, the action made the photos much worse – it distorted colors, clipped shadows and lowered saturation. However, from the point of view of an experienced photographer, the pictures became better aesthetically (although not always).

Perception depends on experience, and in the case of visual arts – on visual experience. Therefore, the title of the action hides a funny trick – the ability to honestly, yet with a grain of irony, respond to the reviews of novice photographers. They would say: “It was better before the processing!”, and I would reply: “That’s right, because you used The Worseizer, not The Improver.”

Oddly enough, my comic Photoshop action instantly became popular. In a few months it was downloaded more than 200 thousand times. I received a lot of positive feedback and ideas for improving it. So on New Year’s Eve, I published an updated version – “The Worseizer 2.0” – which added some options for accurate toning, since errors in the automatic white-balance correction were a common problem.

First iPhone app

Soon the limitations of Photoshop started to hold my ideas back. Looking for a more serious “computing platform”, I settled on the iPhone, as by then it had become my main digital camera.

I am not a programmer, so I decided to look for a developer on Facebook. I considered two working arrangements – a permanent partnership and a one-time job. At the same time, I was ready to invest my own money in this interesting experiment.

Dima Kuznetsov, my friend and partner in joint projects, also supported me. Being not only an entrepreneur but also a photographer, Dima got very excited about the idea and was ready to take on the formal side of things, which gave me space and time for creative work on the product.

Mutual acquaintances introduced us to Den Svinarchuk, a programmer who at the time worked at one of the biggest Russian IT corporations.

We thought that a developer of that level would probably ignore our crazy ideas. Still, out of courtesy, Den asked for a brief outline, although he had no spare time for the project. I was perfectly aware that it made little sense, but just in case I sent a letter and immediately forgot about it, sure that I would never get an answer.

In the meantime, a wonderful iOS developer from Singapore, Dima Klimkin, responded to my call. He was encouraged by his wife Anya, who had attended my photography workshop in Vietnam and was taken with my ideas and views.

We had a call, discussed the future app and our work plan, and agreed on the financial side. Money was not the goal – first of all, we wanted to make a tool for ourselves. But, understanding the volume of work and the investments ahead, we saw it as a token of gratitude from future users.

As soon as we finished our conversation, I received an email from Den. It was short: “Dude, I looked at your idea, I couldn’t sleep for three nights, here are instructions on how to install the application on an iPhone.”

Of course, it was only a technical prototype, but it worked in real time! The user simply pointed the iPhone camera and saw an already processed image. Instead of an app for processing files, it turned out to be something even better – an “aesthetic viewfinder”.

Of course, I was absolutely delighted. But now I had to get the two brilliant programmers to agree to work together.

Den, completely captivated by the idea, was ready to join the project free of charge, out of pure enthusiasm. He said: “What Pasha wants to do is impossible. That’s why I want to do it! Besides, I shot a lot on film myself, and I don’t like what you get with digital. These aesthetic views hit close to home. I want to have fun with you!”

This is how the project got four equal co-owners. The work on the application was in full swing.

Degradr

The comic name “The Worseizer” quite accurately reflected the aesthetic essence of the project and sounded self-ironic, which suited all the participants. But if you really want to be a troll, do it in a more sophisticated way.

We brainstormed an appropriate name for a long time and settled on “Degradr” (following the fashion of the time, the word “Degrader” lost the letter “e” in the last syllable).

Back then it never occurred to us that this name evoked unpleasant associations among Russian speakers and was not suitable for a creative tool.

Initially, however, the app bore that name and appeared in the App Store in June 2015. It did roughly the same thing as the Photoshop action, but with more intelligent algorithms and, most importantly, in real time.

“Masterpiece” button

The key idea of the project at that time was the “Masterpiece” button. This is what the red circle in the center of our first logo symbolized. We even adopted the slogan of George Eastman, the founder of Kodak: “You press the button, we do the rest.”

In Degradr, the user could not control the processing – it was completely automatic. But it was possible to influence the processing with the help of composition. For example, you could include an object of a certain color in the frame or, on the contrary, leave it out. Different angles and layouts led to different results. A more thoughtful composition produced more expressive color and contrast. In a way, the app taught you to shoot.

It is possible that in the future we will return to this interesting idea. But at the time it was implemented rather roughly. As a result, the application gained only 30,000 installs (very modest by App Store standards), and we earned about $1,000 on it through an in-app purchase with expanded features.

But, as they say, a developer who is not ashamed of his first project is a bad developer. It was a very valuable experience and an important stage in the project’s development.

Failed Degradr 2

Almost immediately after the release of Degradr, we started developing a new application. This time we decided to approach it more thoroughly – completely rethink the mathematical model, refine the algorithms, improve usability and design, and significantly expand the functionality.

We realized that just fixing individual digital images wasn’t enough to get good color. It was necessary to turn to the aesthetic heritage of film photography. Besides, we already had film experience – both aesthetic and technical.

However, to bring analogue techniques to a digital platform, film experience and aesthetic training alone were not enough. It turned out that the existing solutions were far from perfect and, most often, had nothing to do with real film. And since there were no high-quality, plausible methods of film simulation, they had to be invented from scratch.

From that moment on, we plunged into long-term (as it turned out later) research and experiments. We hoped to develop a completely new sampling technology in a few months. However, each new experiment revealed yet another layer of problems and solutions, which multiplied exponentially.

At the same time, we started looking for a qualified designer who would take over the work on the appearance of the future app. In the fall of 2015, Dima Novak – a photographer, a researcher of photographic processes and a designer with extensive experience – offered his help. Over the many years of our acquaintance, we had crossed swords with him on matters of photography and processing many times. And finally, the stars aligned for us to embark on a joint project.

Dima was ready to work for the idea: he wanted to take part in creating an app for himself and, in general, to be on a team of interesting, like-minded people. That suited us, of course.

After the first digital sketches, it became clear that this was a whole new level. Dima did not just draw – he also worked out the logic of the interface’s behavior in detail. In addition, as a photographer and researcher, he joined our scientific work as well.

With the arrival of the third Dima, our team grew to five people.

Degradr 2 becomes Dehancer

At first, the new application was called Degradr 2, but we knew that this name was temporary. In December 2015, I again resorted to the “Ask the Audience” option and wrote a post on LiveJournal asking for help in coming up with a name. We received over 100 comments with lots of unexpected ideas – Degradizer, Decolouriser, AntiEnhancer, etc.

In the end, one of the readers suggested D-Enhancer, and another refined it a bit. The option seemed cool, and we took note of it.

This time we turned to our English-speaking acquaintances to hear their opinion. The idea behind the name was not clear to everyone without additional explanation, but the main thing was that it did not evoke negative associations. It was also important for us that it was a non-existent, made-up word (like Kodak).

This is how the project got a new name. It is noteworthy that it was the result of the collective creativity of our team and my readers who followed the project and sympathized with it. We are grateful to everyone who took part in that brainstorm. Nobody took the task seriously – this is probably why everything worked out.

Five years of research

The rapid development of the project in its first year gave way to a five-year lull. For a lot of people, the Dehancer project practically ceased to exist. During this time, we only released a test Dehancer Desktop application for macOS, whose preliminary alpha version did not live long.

In fact, things were developing very actively, but little of it was broadcast to a wide audience.

We experimented, tested hypotheses, and invented algorithms to technically reproduce aesthetic solutions. The works of Robert Hunt, who headed the Kodak research laboratory and wrote classic textbooks on color science and visual perception, were particularly helpful. We ourselves wrote articles about perception and about our methods and algorithms.

We did chemistry in the literal and figurative senses – we endlessly shot color targets on film; developed, scanned and printed them optically on black-and-white and color photographic paper; developed our own engineering software for profile building; and learned to work with spectrophotometers and other precision equipment.

An important step was moving our research into a darkroom. Most profile builders scan films and rely on software processing. We, on the contrary, excluded film scanners from the technological chain and switched to direct optical printing on analog media. This is the only way to unambiguously interpret a negative and get the real film colors as the manufacturer intended. (Read more about this in the article “How we build film profiles”.)

SREDA Film Lab became the technological base for our research. It is the largest film laboratory on the territory of the former USSR. It is our own project, which appeared a little earlier – initially, to pursue our personal photographic interests.

When Dehancer’s research entered its active phase, we already had a technological base. We knew how to develop films using both standard and exotic processes, and we could scan them and print them optically on both black-and-white and color photo paper.

Today, color optical printing is much harder to organize, both in terms of equipment and expertise. But at the lab we had a printmaker with forty years of experience and dozens of other specialists working with film.

In the spring of 2017, our know-how began to take shape. That spring, we released a free alpha version of the desktop application to test the first film profiles with real users. Although the app was not promoted, it gained over 5,000 users who processed around 300,000 photos over the years.

We received the necessary feedback, saw the flaws in the technology and continued to improve it. It took another 3 years.

Turning point

In the fall of 2018, it became clear that the time had come to devote ourselves to Dehancer fully. By that time, each of us was already working on the project on a daily basis, but we still had to earn our living in other ways.

Den, our Chief Development Officer and CTO, decided to make Dehancer his main occupation. This multiplied the pace of development, although it required financial investment from the project’s co-founders.

DaVinci Resolve

By the spring of 2019, the Dehancer technology core was mature enough to become the basis of a real product.

Having temporarily switched to the desktop version, we planned to return to iOS – and to come back not just with the world’s best film profiles, but with the best film simulation toolkit in general. And that means not only color, but also grain and the other features and “effects” of an analog image.

We set the bar high and wanted to make a tool for skilled, demanding users. We found such specialists in the movie industry.

So the idea was to start conquering the world with an OFX plug-in for DaVinci Resolve. We plunged into work on our first commercial product.

Having fallen in love with DaVinci Resolve as the “habitat” of the future plug-in, we began to shoot video and practice grading it. We got to know the technologies and the colorists, visited grading studios and film sets. In other words, we actively plunged into the world of cinema.

Dehancer OFX 1.0

The first version of the OFX plug-in was released on January 15, 2020, after almost a year of work.

In the first month and a half, we did not sell a single license. This did not frighten us – it takes time to popularize a product and to gain the authority and trust of users. So we started preparing long-form articles and videos about the product.

Then we studied our competitors and their advantages. For example, the first version of Dehancer only worked with color, while competitors also had grain. So the Film Grain tool became the next major milestone in the plug-in’s development.

In fact, Degradr already had pretty good grain. But we took a new approach and brought the algorithms close to perfection. We managed to make generated grain much more authentic and natural than scanned grain. Even the highest-quality grain scans bear no relation to the image itself and therefore always look superimposed. Grain in Dehancer is based on image analysis and a micro-contrast map, so the grain is always generated in the right places. We also added a random shift and 3D rotation of the grain granules, which can also be combined into clusters.
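
As a rough illustration of the general principle described above – grain driven by a micro-contrast map rather than overlaid blindly – here is a simplified sketch. The function, filters and parameters are my own assumptions for the sake of the example; the real Dehancer algorithm (with granule shift, 3D rotation and clustering) is far more elaborate.

```python
# Simplified illustration: modulate synthetic grain by a micro-contrast map
# so that grain appears where fine detail lives. Not Dehancer's actual code.
import numpy as np
from scipy.ndimage import gaussian_filter

def add_grain(luma, strength=0.08, seed=0):
    """luma: 2-D array of luminance values in [0, 1]."""
    rng = np.random.default_rng(seed)

    # Micro-contrast map: local deviation from a blurred copy of the image.
    blurred = gaussian_filter(luma, sigma=2.0)
    micro_contrast = np.abs(luma - blurred)
    micro_contrast /= micro_contrast.max() + 1e-8

    # Synthetic grain field, lightly blurred so granules have some size.
    grain = gaussian_filter(rng.standard_normal(luma.shape), sigma=0.8)

    # Place grain preferentially where the micro-contrast map is high.
    return np.clip(luma + strength * micro_contrast * grain, 0.0, 1.0)
```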

Dehancer today

Sales were not long in coming. Since then, they have grown steadily as new tools are added and updates are released.

We keep developing the OFX plug-in for DaVinci Resolve on macOS, Windows and Linux. Step by step, we released plugins for Final Cut Pro, Adobe Premiere, After Effects, Photoshop and Lightroom. The iOS and online versions of Dehancer have also been introduced.

We have many ideas for new analog-based imaging tools and related services.

Today the Dehancer team consists of 35 people. We live and work in different cities and even countries – Antalya, Barcelona, Dubai, Kiev, Ljubljana, London, Minsk, Moscow, Sydney, Tallinn, Tbilisi, Tokyo, Vilnius, Warsaw and Yerevan.

You can get to know us on the dedicated Team page.

I hope this story will continue, and after a while I will definitely write the second part.