Robin's Newsletter


Armend Avdijaj

November 3, 2023

Introduction to Armend Avdijaj: From Chestnuts to Data Streams


Armend, can you briefly introduce yourself?


Thank you for the invitation; I'm delighted to participate in this interview. My name is Armend Avdijaj, I'm in my mid-30s, and I live with my wife and two children. I'm the CEO and Co-Founder of GlassFlow, a startup specializing in data streaming infrastructure that I've been building alongside Ashish Bagri. My journey in the startup world began in 2012 when I joined Mister Spex, and prior to GlassFlow I ran another startup where we created a data tool for influencer marketing. My educational background includes a degree in business administration and a master's in IT management. My experience with data infrastructure at my previous company, together with my studies in IT management, has been the foundation for what I do today with GlassFlow. I have a deep passion for continuous learning, particularly in observing the meaningful impact data can have on every business decision, especially the move from the "unknown" to shedding light on the dark by putting data into the picture. I feel incredibly fortunate to combine my passion for data solutions with my childhood dream of becoming an entrepreneur.


Early Endeavors


Your first entrepreneurial experience involved buying and selling chestnuts at a Christmas market. With chestnuts readily available nearby, what inspired you to sell them?


Just a few weeks ago, I shared with my two boys the story my mother told me. I was just five years old, and my parents took me to a Christmas market, where they bought chestnuts for me. The following day, I couldn't help but wonder why people were paying for chestnuts when there were trees right in front of our house with plenty of them. So I collected them and sat on a blanket in front of our house, waiting for someone to buy them. To my delight, a few kind-hearted ladies passed by, and even though they knew these chestnuts weren't for eating, they handed over real money in exchange, and I could buy myself some ice cream.


You've been known for your proactive initiatives, like organizing large school parties and managing sponsors and insurance. What drove these ventures?


The experience I had as a child with the chestnuts, and the excitement that grew with me under the mantra that everything is possible, drove me to try out some ideas during my studies. The first one, when I was 19, was hosting school parties with my flatmate at exclusive locations for 800 to 1,000 people. In a matter of weeks, we navigated the entire process, which included conducting surveys, developing a website, officially registering a company, crafting a financial plan, raising funds from family and friends, securing music acts, and obtaining sponsorships from entities such as banks, gyms, and insurance companies. It was a delightful experience to see that so many people trusted a 19-year-old to run those kinds of events.


Mister Spex Experience


After moving to Berlin and completing your Bachelor's degree, you joined Mister Spex. During this period, before pursuing your Master's, you delved into areas like Marketing Data Warehouse, TV Tracking Solutions, and Customer Journey.


What key insights did you gain about the analytics-driven nature of marketing?


Those years at Mister Spex were extremely rewarding for me. In a relatively brief period, I gained extensive knowledge on leveraging data for marketing decision-making. This time allowed me to develop a deeper comprehension of customer centricity and how to bridge the gap between user experience and the available data, all with the overarching objective of creating a mutually beneficial scenario for both users and our company. Let me give you an example: if you look at typical purchase data, which includes order frequency, product categories, profit margins, and marketing channels, you gain valuable insights, but it only reveals half of the story. What's often overlooked is the user's journey: how they discovered our product, the number of days it took them to make a decision, which product features influenced their buying decision, and the various user personas using our shop in different ways.


How would you contrast the performance-driven marketing of that time with today's practices?


There have been a lot of changes: new platforms have arisen, and more tools are available than ever to optimize every detail of the funnel, so marketers must be aware of them. However, the most substantial changes and challenges, in my view, revolve around the emergence of new persona groups, each with distinct expectations regarding your product, communication, and your company as a whole. Additional factors have come into play, such as sustainability, the language you use, response speed, personal branding, and more. This dynamic shift reinforces even more that marketers, when looking into performance, need to take the customer experience into account.


Sponsokit Journey


You founded Sponsokit, which began as a marketplace. What prompted the shift to a data tool?


In the initial stages of our Sponsokit journey, we achieved significant success. We managed to attract top-tier influencers and facilitated collaborations with brands through our marketplace. As a founding team, we were very data-driven and optimized every step of the user journey, which resulted in very strong growth. However, we eventually encountered a roadblock that left us puzzled: companies weren't increasing their budgets on our product as we had anticipated. So we dug into research and learned something important. These companies were indeed allocating more resources to influencer marketing, but after several rounds of using our product, they began hiring influencer marketing managers. These managers were tasked with involving influencers not just in product placements but in the entirety of the user experience, including activities such as creating limited-edition products, establishing long-term exclusive partnerships, and coordinating events, among others. We understood that this was something we wouldn't do, and agencies seemed like a more suitable fit for these objectives. However, the issue of identifying new influencers and managing relationships within spreadsheets became increasingly problematic. To address this challenge, we collaborated with our top clients to develop an influencer marketing data tool. This tool empowered companies to search for influencers worldwide and efficiently manage deal pipelines across their teams.


At its height, 500 active companies were listed on the platform. Do you have any notable or humorous influencer stories to share from this time?


Yes, too many. One that I have in mind, and that made me rethink my journey a lot, was the following. There was a very well-known 18-year-old YouTuber from Cologne, Germany. He began collaborating with top brands through our marketplace and, within three weeks, earned around 30,000 EUR. Then, one day, he unexpectedly showed up at our office with two more YouTubers and wanted to talk to me. During our discussion, they proposed that I become their manager and shared their ambitious plan to rent a six-bedroom apartment in Berlin, where they intended to record videos 24/7 with emerging YouTubers. I immediately made clear that this was nothing I wanted to do, and they left. Unfortunately, a few months later I heard he had gone to rehab, and it took him a while to recover. This particular moment in my life left me uncertain about whether my company's activities were truly beneficial for young people.


The transition of Sponsokit to an analytics tool stands out. Why did you decide on this change, and how did it shape the company's trajectory?


Our decision to pivot stemmed from a clear trend we observed in how companies were engaging with influencers. Initially, their primary challenges revolved around locating influencers, establishing collaborations, generating reliable reports, and re-engaging with influencers. As companies aimed to scale their influencer marketing efforts, this process became increasingly disorganized. It became clear to us that our product needed to adjust to the new needs of these companies. This decision wasn't without its difficulties, as we already had clients who regularly used our marketplace for their monthly spending. However, we recognized that in the long run there was no option other than building the data infrastructure for these companies.

Consequently, we executed a pivot: we created new software, changed our communication entirely, adjusted how we managed sales, and switched from a transaction-based commission to an annual subscription for new clients. This strategic shift proved to be highly beneficial. Within two months, we developed a minimum viable product (MVP), and we promptly began securing our first annual subscriptions with new customers.


In the ever-evolving domain of marketing, how do you strategize the best allocation of the resources available for API, scraping, ML, and AI tools?

Impact of COVID


The pandemic made you switch your business model from monthly to annual subscriptions. What influenced this decision?


That was clearly the future for us. The development of how companies thought about influencer marketing and how they worked with influencers was a clear signal to us.


A significant portion of your clientele was from the travel sector. How did COVID-19's disruptions in travel affect Sponsokit?


The month before COVID hit the EU and travel restrictions were announced, we switched to annual subscriptions and signed the first customers. However, we hadn't yet had the opportunity to introduce our new product to all of our clients. This proved to be an incredibly challenging period. Within a matter of weeks, 70% of our revenue was gone. I remember calls with clients telling me they had been fired or that the majority of employees had switched to reduced working hours. For clients operating online shops, shipping orders became increasingly difficult due to COVID cases in their warehouses. Because of our monthly contracts we were suffering a lot at that time, and raising new money wasn't an option. Our traction was going down and nobody was investing anymore.


Given challenges like nearing insolvency, you clearly have a resilient spirit. Can you share more about the role resilience has played in your journey?


Like many companies during that period, we found ourselves in survival mode. We began exploring every possible cost-cutting measure, all while knowing that these measures could only buy us a few more weeks without fundamentally altering our situation. The team was shocked, and we knew we needed a solution as soon as possible to get everyone back on track mentally. As an initial step, we redirected our focus towards companies benefiting from the imposed restrictions. We began to acquire clients from the gaming industry and the mobile app sector. Still, the sales cycles were far too long, making it apparent that this was not a sustainable solution given our circumstances.

Consequently, I leveraged my network and initiated discussions with potential buyers worldwide. During these conversations, I heard stories very similar to ours, but after two months I was able to find a tech company that was interested in buying our technology and repurposing it. The process until the deal was signed took longer, but we were able to finance our operations.


GlassFlow Insights


GlassFlow, founded in April 2023, marked your foray into data streaming and tackling infrastructure challenges. What drove this initiative?


Two primary reasons motivated me to take this path: my general interest in data solutions and my personal experience during my time at Sponsokit. At Sponsokit we were running a batch processing infrastructure for a while, and it was working well. During the night our processes ran, and in the morning our clients had access to their updated dashboards. However, as time progressed, our users' expectations grew. They were asking for data to be available immediately so they could use it for real-time marketing activities such as adjusting cost-per-click on advertisements or utilizing dynamic ads. This marked the point at which my previous co-founder and I invested a significant amount of effort in search of an appropriate solution that would allow our small team to offer this option to our clients with minimal hassle. Unfortunately, we couldn't find an existing solution that suited our needs. The available technology was complex, requiring my co-founder to learn a new coding language, delve deep into documentation, modify our existing batch-processing infrastructure, and incur additional IT costs, all without any guarantee of success. As a result, we decided to build our own solution, which initially functioned well for approximately three weeks. However, whenever the data schema at one of our data providers changed, our solution ran into errors. It was quite a frustrating experience not being able to provide what the customer needed because of a technical barrier. Later, Ashish joined Sponsokit and managed to develop a minimum viable product (MVP) for Sponsokit. It is important to note that someone with Ashish's expertise isn't available to everyone: he has more than 10 years of experience in building real-time applications, can be selective about the companies he wants to work for, and his salary isn't something that every organization can afford. This situation prompted us to convene and engage with data engineers within our network to determine whether they were encountering similar challenges.
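That schema-drift failure mode is worth a quick illustration. Purely as a sketch of the problem (not Sponsokit's actual code; the field names are made up), a minimal Python guard that surfaces an upstream schema change as a clear error, instead of silently corrupting dashboards, might look like this:

```python
# Hypothetical illustration of the failure mode described above: validate
# incoming records against the schema the pipeline expects, so a provider's
# schema change raises a clear error instead of producing wrong dashboards.

EXPECTED_FIELDS = {"campaign_id": str, "impressions": int, "spend": float}

def validate_record(record: dict) -> dict:
    """Raise early if a data provider changed its schema."""
    for field, expected_type in EXPECTED_FIELDS.items():
        if field not in record:
            raise ValueError(f"Schema drift: missing field '{field}'")
        if not isinstance(record[field], expected_type):
            raise TypeError(
                f"Schema drift: '{field}' is {type(record[field]).__name__}, "
                f"expected {expected_type.__name__}"
            )
    return record

# Example usage on a batch from a provider (field names are illustrative):
# clean = [validate_record(r) for r in provider_batch]
```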


Data engineers sometimes don't receive the recognition they deserve. Could you illuminate the "Python First" approach and its relevance today?


Unfortunately, in a lot of organizations data engineers are seen as individuals who don't directly contribute value to the business and who spend their entire day putting together tools that nobody understands. In my experience, that usually happens when data engineers are forced to spend the majority of their time maintaining their data infrastructure or fixing errors in their data pipelines. That's pretty sad, because they have the superpower to enable many use cases that improve the customer experience, and very often they play a critical role. Just think about real-time recommendations, dashboards, database synchronization, product searches, and many more use cases where data engineers are the ones bringing these experiences to life.


Python as a programming language is already very popular and is the go-to language for the majority of data engineers. At the same time, numerous open-source data streaming products are still founded on outdated coding languages, primarily due to the legacy of technologies developed 8 to 10 years ago. The modern data engineer doesn't want to switch to a more complex coding language, while a considerable number of individuals are only now entering the field of data engineering, typically by teaching themselves Python. By prioritizing Python, we make data streaming more accessible and empower a broader audience to harness this technology.


With industry giants like Confluent and Aiven and platforms like Kafka, how does GlassFlow distinguish itself in addressing pressing challenges?


The team at Confluent are the builders of Kafka and offer it as a managed service to their clients. They started Kafka at LinkedIn 12 years ago. Kafka is a great technology and has become the standard solution for data teams and enterprises. But our research revealed that in “nearly 90% of all use cases it is overengineering to use Kafka” (a quote from a data lead). It is written in Java and built for a huge amount of data throughput. Engineers invest considerable time navigating the complexity of setting up, testing, and maintaining Kafka. Very quickly, the person who knows Kafka becomes a bottleneck in the organization, and the Kafka system tends to be a black box. This no longer aligns with the requirements of modern data teams. Over the past few years, numerous technological advancements have emerged, offering opportunities to simplify data streaming setups and data transformation significantly.


Aiven offers a solution that allows companies to use open-source software, such as Apache Kafka, Apache Flink, and many more, as a managed service. They solve the needs of companies that have already decided to use Kafka for data streaming and are now looking for a way to reduce maintenance efforts. Our relationship with Aiven is particularly close. Heikki Nousiainen, the CTO and founder, invested because he understands from firsthand experience (offering Kafka) the needs of modern organizations when it comes to using data streaming. We both share the belief that data teams require a more approachable way of working with data streams than what is presently available, particularly to harness emerging technologies such as AI effectively.


The industry is gradually gravitating towards a unified data stack model. How is GlassFlow aligning with this trend?


That's something we follow very closely. In the last year the term “modern data stack” became very popular. It refers to the collection of tools and cloud data technologies used to collect, process, store, and analyze data. What has happened is that each component within the modern data stack has evolved into an enterprise-level solution, which is great from a technical perspective but tough to set up and maintain, and very often overengineered for the use cases of the companies.


During conversations with users and team leads, we see the challenges they face when it comes to stitching several solutions together into a data pipeline. Now imagine that, on top of multiple solutions, you have multiple teams depending on those pipelines. It quickly becomes chaotic, and organisations end up building internal processes across teams for managing pipeline changes. Unfortunately, such processes tend to be constrained by a handful of experts who soon become bottlenecks.


Could you explain what GlassFlow specifically offers Python developers in terms of enhancing data pipelines?


Our objective is to shift users' attention away from “How do I set up my data infra?” to spending the majority of their time transforming the data based on the needs of their use cases. To accomplish this, we are introducing an automated setup that grows with the pipeline needs of the users. For instance, instead of making decisions about data partitions in the setup phase, our solution makes adjustments on the fly based on the data volume and the number of data producers and consumers involved.
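As a toy illustration of that idea (not GlassFlow's actual algorithm; the thresholds and formula below are invented for the example), deriving a partition count from live metrics instead of fixing it up front could look like this:

```python
# Toy sketch: pick a partition count from observed metrics instead of
# deciding it once during setup. Numbers are illustrative only.

def suggest_partitions(events_per_sec: float, consumers: int,
                       target_events_per_partition: float = 1000.0) -> int:
    """Keep per-partition load near the target while giving every
    consumer at least one partition to read from."""
    by_throughput = max(1, round(events_per_sec / target_events_per_partition))
    return max(by_throughput, consumers)

print(suggest_partitions(events_per_sec=5200, consumers=3))  # -> 5
print(suggest_partitions(events_per_sec=800, consumers=4))   # -> 4
```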

When it comes to transforming the data, as a Python developer you would usually need to spend time deploying your transformations to your infrastructure by setting up an additional service. We remove that effort for you: with a single command, your functions are deployed.
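To make that concrete, here is a rough sketch of what such a transformation function could look like in plain Python; the handler name, the event fields, and the idea of a one-command deploy are illustrative assumptions, not GlassFlow's documented API:

```python
# transform.py -- hypothetical sketch of a pipeline transformation function.
# Handler signature and event fields are assumptions for illustration; they
# are not copied from GlassFlow's documentation.

def handler(event: dict) -> dict:
    """Enrich an incoming event before it reaches downstream consumers."""
    impressions = event.get("impressions", 0)
    event["ctr"] = event.get("clicks", 0) / impressions if impressions else 0.0
    return event

# In the workflow described above, deploying this function would then be a
# single CLI command rather than running and maintaining a separate service.
```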


The testing of your pipeline can happen directly through an integration with Locust on GlassFlow. This way, you as a user don't need to implement a high-load testing environment through a new tool but can stay on the GlassFlow platform.
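For a sense of what such a load test looks like, here is a plain Locust script; the ingestion path and payload are placeholders, and this is a generic Locust example rather than GlassFlow's built-in integration:

```python
# locustfile.py -- generic Locust sketch for load-testing an HTTP ingestion
# endpoint. The /publish path and the payload are hypothetical placeholders.
from locust import HttpUser, task, between

class PipelineProducer(HttpUser):
    wait_time = between(0.1, 0.5)  # pause between simulated events

    @task
    def publish_event(self):
        self.client.post("/publish", json={"user_id": 42, "action": "click"})

# Run, pointing at your pipeline's ingestion URL:
#   locust -f locustfile.py --host https://example-pipeline.local
```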

The points mentioned above already save data engineers a lot of time, and on top of that they don't need to learn a new coding language and can run all their commands through our CLI in Python.


We are offering a free version to our users which they can use on their local machines to test and play around before deploying to the cloud.


Customer contentment is vital. How does GlassFlow nurture client relationships and ensure their satisfaction?


It's important to us to have a very close relationship with our customers. Frequently, you find yourself thinking about how the product should develop further and which feature has priority. The decisions you need to make have a big impact on your business, and we believe that the truth can only be uncovered when customers are actively involved in the process. That's why we implemented a few initiatives. From the very beginning, our company has followed a structured approach for conducting one-hour interviews with potential GlassFlow customers. These interviews serve to gain a deep understanding of their expectations, challenges, and personas. We record these interviews and encourage our team to listen to them. Periodically, we revisit and assess these interviews, using them to challenge our decisions. On top of that, there are initiatives to measure the happiness of our users by understanding their engagement with the product: monitoring how much data they process through GlassFlow and how many consumers they connect are relevant metrics for us.


Where do you envision GlassFlow in the next five years, and what are your expectations for market developments?


GlassFlow has the opportunity to accelerate the growth of realized use cases in data streaming and to become the de facto standard for data streaming pipelines worldwide. This will empower a greater number of data engineers to develop use cases and gain recognition for the tangible value they contribute to organizations.

We have already mentioned a few points regarding the market trends we see. We believe that more products will become all-in-one solutions, lowering the number of tools you need to implement to run your pipelines. We will also see data engineering teams and ML teams merge more and more into one user group for data streaming, increasing the expectation of collaborative streaming pipelines.

Get the Founder's Pitch

Sign up for Robin's newsletter to access the Founder's Pitch, insights from entrepreneurs, investors, and market trends. Enjoy the best stories from our community.
