By

Maximillian Green

August 12th, 2025

6 minutes

Building Digital Infrastructure for Biotech: A Framework

For ten years I've wrestled with a simple question: How do we build hardware and software systems that actually help scientists do science?

The answer matters. Companies like Zymergen, Synthego, and Ginkgo Bioworks have shown that the right infrastructure can make research move five to ten times faster. Think about that. An experiment that once took weeks now takes days. A discovery that might have taken a year arrives in months.

Yet most laboratories still work with digital tools that belong in a museum. They lose data in Excel spreadsheets. They email files back and forth until nobody knows which version is current. They repeat experiments because they can't find last month's results.

This is madness, and it's expensive madness. In my experience, a typical lab loses 30 to 50 percent of its productivity to bad infrastructure. That's like having your scientists work three-day weeks.

The Problem with Copying Giants

You might think the solution is obvious: copy what Ginkgo does. But that's like telling a startup to copy Amazon's logistics network. These companies spent hundreds of millions building their systems. You don't have that money, and you don't need their complexity.

What you need are principles. Ways of thinking about your infrastructure that will guide you as you grow.

Five Principles That Matter

Here's what I've learned about building systems that scientists will actually use.

First, protect your data. This sounds obvious until you realize how many labs don't do it. Research data vanishes from laptops. Patient information sits on shared drives. One ransomware attack and years of work disappear.

Build your system with some healthy paranoia. Assume drives will fail, laptops will be stolen, and someone will accidentally delete everything. Have backups. Have backups of your backups. Test your recovery system before you need it.
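As a sketch of that healthy paranoia in practice, here is a minimal backup routine that refuses to trust a copy until it has verified it against a checksum. The directory layout and function names are invented for this example, not a prescription:

```python
import hashlib
import shutil
from pathlib import Path

def checksum(path: Path) -> str:
    """SHA-256 of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def backup_and_verify(src_dir: Path, dest_dir: Path) -> list[str]:
    """Copy every file in src_dir to dest_dir, then verify each copy
    byte-for-byte by comparing checksums. Returns the verified names."""
    dest_dir.mkdir(parents=True, exist_ok=True)
    verified = []
    for src in sorted(src_dir.iterdir()):
        if not src.is_file():
            continue
        dest = dest_dir / src.name
        shutil.copy2(src, dest)
        if checksum(src) != checksum(dest):
            raise IOError(f"backup of {src.name} failed verification")
        verified.append(src.name)
    return verified
```

The verification step is the part most scripts skip; it is also the part that tells you a backup actually worked before the day you need it.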

Second, know who can see what. In biotech, this is the law. Patient data has strict rules. Intellectual property needs protection. Even within your team, not everyone needs access to everything.

Design your permissions from the start. It's much harder to add security after you've built a system than to build it in from the beginning. Think of it like sterile technique: you maintain it always, not just when you remember.

This doesn't mean your data should be siloed from day one, but it does mean you should know why it isn't.
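A minimal sketch of what "know who can see what" looks like in code. The roles and dataset names below are made up for illustration; the point is that the rules live in one explicit place rather than being implied by folder permissions:

```python
# Toy role-based access rules -- roles and datasets are illustrative.
PERMISSIONS = {
    "patient_data":  {"clinician", "admin"},
    "assay_results": {"scientist", "clinician", "admin"},
    "protocols":     {"scientist", "admin"},
}

def can_read(role: str, dataset: str) -> bool:
    """True if the given role is allowed to read the dataset.
    Unknown datasets default to no access."""
    return role in PERMISSIONS.get(dataset, set())
```

Defaulting unknown datasets to "no access" is the sterile-technique instinct applied to data: access is granted deliberately, never by accident.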

Third, make data findable. A result you can't find might as well not exist. Every piece of data needs a home, a name, and a way to search for it.

This means fighting the natural entropy of research. Scientists want to name files "experiment_final_v2_REAL_final.xlsx." They want to store data wherever is convenient at the moment. Your system must make the right way easier than the wrong way.
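One way to make the right way easier than the wrong way is to enforce a naming convention at the point of upload. The convention below is invented purely for this example; substitute your lab's own scheme:

```python
import re
from typing import Optional

# Hypothetical convention: project_YYYY-MM-DD_assay_runNN.ext
# (illustrative only -- adapt the pattern to your lab's scheme)
FILENAME_PATTERN = re.compile(
    r"^(?P<project>[a-z0-9]+)_"
    r"(?P<date>\d{4}-\d{2}-\d{2})_"
    r"(?P<assay>[a-z0-9-]+)_"
    r"run(?P<run>\d{2})"
    r"\.(?P<ext>csv|xlsx|fastq)$"
)

def parse_filename(name: str) -> Optional[dict]:
    """Return the file's metadata fields if the name follows the
    convention, or None so the uploader can reject it with a hint."""
    m = FILENAME_PATTERN.match(name)
    return m.groupdict() if m else None
```

A name that parses is a name that is searchable: the project, date, and assay become queryable metadata for free, and "experiment_final_v2_REAL_final.xlsx" simply never enters the system.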

Fourth, plan for machines to read your data. Today you're looking at results with your eyes. Tomorrow you'll want to analyze patterns across hundreds of experiments. Next year you'll need to generate regulatory reports. In five years, an AI system might be mining your data for insights you never imagined.

Store your data in formats that computers can read. Use standards when they exist. Document what your fields mean. Your future self will thank you.
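A minimal sketch of what a machine-readable, self-documenting record can look like. The field names here are assumptions for illustration, not a standard:

```python
import json
from datetime import datetime, timezone

# Illustrative record schema -- field names are invented for this example.
FIELD_DOCS = {
    "sample_id":   "unique sample identifier, string",
    "assay":       "assay type, e.g. 'qpcr'",
    "value":       "measured quantity, float",
    "units":       "units of 'value', e.g. 'ng/uL'",
    "recorded_at": "UTC timestamp, ISO 8601",
}

def make_record(sample_id: str, assay: str, value: float, units: str) -> str:
    """Serialize one measurement as a JSON line whose fields are
    documented in FIELD_DOCS."""
    record = {
        "sample_id": sample_id,
        "assay": assay,
        "value": value,
        "units": units,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }
    assert record.keys() == FIELD_DOCS.keys()  # schema and docs stay in sync
    return json.dumps(record)
```

Keeping the field documentation next to the code that writes the data is a cheap way to guarantee that "document what your fields mean" actually happens; the assertion fails the moment the two drift apart.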

Fifth, make everything repeatable. In science, if you can't repeat it, it didn't happen. The same principle applies to your data pipelines.

But here's where startups get it wrong: they try to automate things from day one. That's backwards. The first time you run an analysis, do it manually. Document what you did. The second time, you'll know if it's worth scripting. By the tenth time, you'll be kicking yourself if you haven't automated it.

Track your analysis scripts. Version your protocols. Know what changed, when, and by whose hands. You're building a record of what works. You're learning which processes actually matter to your work. Some things you'll do once and never again. Others will become the backbone of your operation. You can't know which is which until you've lived with them for a while.
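One lightweight way to record what changed, when, and by whom is an append-only log. This sketch uses JSON lines; the item names and actions are invented for illustration:

```python
import json
from datetime import datetime, timezone
from pathlib import Path

def log_change(log_path: Path, item: str, action: str, author: str) -> dict:
    """Append one who/what/when entry to an append-only JSON-lines
    audit log, and return the entry that was written."""
    entry = {
        "item": item,        # e.g. a protocol or script name
        "action": action,    # e.g. "updated", "created"
        "author": author,    # whose hands made the change
        "at": datetime.now(timezone.utc).isoformat(),
    }
    with log_path.open("a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry
```

For code, a version-control system like git already gives you this record; the value of a log like this one is extending the same habit to protocols, instrument settings, and anything else git doesn't see.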

Start Small, Think Big

You don't need to build all of this at once. In fact, you shouldn't.

Start with something even simpler than fixing your biggest pain point: get a local server running. I mean it. Buy a decent machine, set it up in a corner of the lab, and learn to use it. Put a database on it. Start backing up your data to it every night.

This sounds boring, but it changes everything. Once you have your own server, you can build small applications without worrying about cloud security. You can experiment with AI tools without sending your data to OpenAI. You can automate tasks without asking IT for permission every time.

Scientists resist this because it feels like a distraction from research. It's the opposite. That local server becomes your laboratory's digital bench. You'll build things on it you never imagined you needed. Start there, get comfortable with it, then tackle your bigger problems.

But as you build, keep these principles in mind. Every quick fix you implement, every tool you buy, every process you create should move you toward a system that is secure, organized, accessible, automated, and repeatable.

The Real Secret

Here's what nobody tells you about biotech infrastructure: the technology is the easy part. The hard part is changing how people work.

Scientists have spent years developing their own systems, even if those systems are just collections of Excel files and notebook pages. They don't trust your new system. They don't want to learn it. They're too busy doing science to think about infrastructure.

You have to prove your system makes their life better. Show them how it saves time today, not in some theoretical future. Celebrate the first person who finds an old result they thought was lost. Make heroes of the people who adopt the system early.

Because in the end, the best infrastructure in the world is worthless if nobody uses it. Build for the scientists you have, not the ones you wish you had. Make the right way the easy way. Then get out of their way and let them do science.

That's how you build infrastructure that matters. With simple principles, steady progress, and relentless focus on helping scientists do what they do best: discover.

If this sounds too complicated, we made this plan just for you: https://aradon.bio/for-startups

We'd love to fall in love with your science. Can you introduce us?

