by Corin Wagen · Nov 18, 2025
This week, I interviewed Navvye Anand, co-founder of Bindwell. Founded last year by Anand and Tyler Rose, Bindwell just announced a $6M seed round in support of their mission to discover better pesticides with AI.
Anand was happy to speak with Rowan and share his perspective on pesticide discovery, how machine learning gives them a unique advantage, and what he's excited about right now in AI research.
Corin Wagen: I'm here with you, Navvye, right after you raised a $6M seed round. Do you mind telling readers what Bindwell is and what you guys do?
Navvye Anand: I'm the founder of Bindwell. We're trying to build better pesticides using state-of-the-art AI models. Our first pesticide is targeted at this insect called fall armyworm, which does nearly $10 billion worth of crop damage in Africa alone. So we're using and training foundational AI models to build better pesticides for invasive pests like fall armyworm.
Corin: You mentioned you're building and training AI models to do this. Can you tell us a little bit more about what the R&D cycle looks like for pesticides and how this all works operationally?
Navvye: Sure. So the R&D cycle is sort of similar to the R&D cycle in pharma. You have your in vitro tests, then you have your in vivo tests. After you have your in vivo tests and efficacy studies, you can move on to the greenhouse. And after you go to the greenhouse, you can move on to field trials. Once a compound is in field trials, it's the equivalent of a drug being in clinical trials, and both consume a similar share of the overall R&D budget. That's where the meat of the money is made.
We're training foundational models to generate small molecules and peptides and predict their properties such as binding to a specific target, mitigating off-target effects, predicting toxicity, all of that stuff. And we're focused on pesticides because we think it's this incredibly important component of the global food supply chain right now and no one's really paying attention to it.
You hear about cool cancer research all the time, but when was the last time you heard about someone developing a pesticide? There have been fewer than 25 new pesticides in the last decade, and that was down from 40 the decade before that. So it's been this downward slope of fewer and fewer active ingredients being discovered. And we're here to fix that problem.
Corin: Why do you think pesticides haven't attracted the same attention that, as you mentioned, cancer or Alzheimer's or aging have? What's the difference there?
Navvye: I think it's just less sexy. Curing cancer is definitely a sexier topic that can rightfully attract a lot of funding. In addition to this, I think a lot of talented biotech people gravitate towards pharma because licensing deals over there tend to be a bit bigger. On average, it costs roughly $300 million to get a pesticide approved by the EPA, while the entire R&D process for drugs is more like $1–1.5 billion. So pharma ends up being roughly three to five times bigger per program, and the deals scale accordingly.
Corin: As we move into this world of "AI designing biology," understanding biomolecular interactions, and training ML models on this—how does this all directly translate over into pesticides? I know your team has done a lot of work in this space. Do you feel like you have to rethink how you approach these problems for pesticides, or is it pretty much the same problems?
Navvye: I think the underlying biochemistry is really similar and you can basically plug and play a lot of these different models that exist out there. Chai, Chai-2, BoltzGen, OpenFold: all of these models are incredibly useful for the initial stage—modeling targets, maybe finding binders—but that's a small part of the entire pipeline.
In our minds, your models are only as good as the proprietary data you own. That's why we have a wet-lab setup where we can run fairly high-throughput assays and feed the results directly back into model training.
In fact, we just got two nanomolar binding hits. These didn't come from a structure-generation model; the sequences were generated directly by Atomwell [Bindwell's all-atom foundational model]. The first was 1.2 nM and the other was higher at 17 nM, and then just a couple of days ago we got a picomolar binding-affinity hit as well.
Corin: Congrats!
Navvye: Thank you. These are the things that feel so good.
Corin: Yeah, that's terrific news. I've been on the other side of finally getting down to low nM on a series before and it feels really good.
Navvye: Yeah, it's really euphoric when you find a hit.
Corin: Thinking about the analogies to traditional human drug discovery: there, we think a lot about solubility, permeability, on-target and off-target effects, toxicity, and bioavailability. How do these considerations differ when we talk about pesticides? In what respects is the design space similar and in what respects is the design space different?
Navvye: Yeah, I think the design space is quite similar in that you have this multi-objective optimization problem: there are a bunch of different properties you want your biologic or small molecule to have, and sometimes they're directly competing with each other.
For example, one of our in silico peptide hits was very difficult to express; we just couldn't get it expressed. So in the future we're going to add a head that predicts expressibility in either Sf9 or E. coli. You end up with this landscape of multi-objective optimization problems where there's a Pareto frontier: you can't improve one property without sacrificing another. So in that sense it's really similar to drug discovery and pharma. You care a lot about solubility, you care a lot about toxicity, you want to mitigate off-target effects.
Where it differs, I think, is in the range of animals and species that you care about. You care quite a bit about bee toxicity, for example, and you don't really care about that in pharma. You care quite a bit about earthworms, and then you care about the interactions of soil with your pesticide. That is something that is just completely absent in pharma. That being said, ADMET is incredibly difficult to model in complex human organisms compared to one small 5-gram insect. But nonetheless, it's difficult for both.
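The Pareto-frontier idea Anand describes — a candidate survives only if no other candidate beats it on every objective at once — can be sketched as a non-dominated filter over scored candidates. This is a minimal illustration, not Bindwell's pipeline: the peptide names, objectives, and scores below are all invented, and every objective is oriented so that higher is better (toxicity is negated).

```python
# Minimal Pareto-frontier filter over hypothetical candidates.
# Convention: every objective is "higher is better" (toxicity is negated).

def dominates(a, b):
    """True if a is at least as good as b on every objective
    and strictly better on at least one."""
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))

def pareto_front(candidates):
    """Return the non-dominated subset of {name: (obj1, obj2, ...)}."""
    return {
        name: scores
        for name, scores in candidates.items()
        if not any(dominates(other, scores)
                   for other_name, other in candidates.items()
                   if other_name != name)
    }

# Invented peptides scored on (binding, -toxicity, expressibility):
candidates = {
    "pep_A": (0.90, -0.8, 0.2),  # strong binder, hard to express
    "pep_B": (0.70, -0.3, 0.8),  # balanced
    "pep_C": (0.60, -0.4, 0.7),  # worse than pep_B on every axis
    "pep_D": (0.95, -0.9, 0.1),  # strongest binder, worst elsewhere
}

front = pareto_front(candidates)
# pep_C is dominated by pep_B and drops off; the other three each win
# on at least one axis, so they all sit on the frontier.
```

Improving any frontier member on one axis then means giving something up on another, which is exactly the trade-off structure described above.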
Corin: Can I double-click on the soil-interaction comment there? Would you be willing to say a little bit more about this?
Navvye: Yeah. For example, let's say you spray a pesticide on a leaf. When you spray a pesticide on a leaf naturally some of it is going to drip down into the soil. So you want to model the effects of the pesticide in the soil and you want to make sure that it doesn't degrade the soil to a huge extent.
Corin: Can I ask: how do you guys feel that AI gives you a unique advantage here? Many people have worked on pesticides before. Where do you think the big Bindwell difference is going to be seen? Number of candidates, quality of candidates, the number of interactions you can take into account, all of the above? Give us a glimpse of the future here.
Navvye: I think it's three-tiered.
The first one is really target-specific discovery. We believe that we found the first binders for our insect that were really target-specific. The way it usually works in agriculture is that you hire chemists and biologists to work on molecules with insecticidal properties and then you just run the insect assay: you inject the small molecule into the insect, measure the LC50 or LD50, and reverse-engineer the mechanism after you get a hit.
We're trying to mitigate off-target effects by finding targets in the insect that are unique to it, so the pesticide doesn't interfere with anything humans have. By focusing on insect-specific targets, you inherently minimize the risk of off-target effects, and structure-prediction tools let you do things like that pretty nicely.
The second part is, like you mentioned, the quantity. If you view this perfect pesticide as having X properties, you can view this as a search-space problem. And because AI models are pretty good at predicting the properties given enough on-target data, we can screen billions of compounds and get the perfect candidate given X amount of properties. That's faster and better than MD [molecular dynamics] simulations in our experience, and I truly believe that it's more "bitter lesson" than some of the manual approaches that people have been using for a long time.
And the third is quality. It goes hand in hand with the second: when you add different heads to your pre-trained model and use multitask learning effectively, you can model many of these different interactions much more easily. Data that would take someone else a lot of pre-processing and cleaning can just be fed into your transformer through different heads. Multitask learning has worked so well in natural-language processing and in bio, and we're transposing that over to pesticide problems.
Corin: Yeah. And I suppose there's also some sense in which you have scarce data that maybe you wouldn't be able to learn from scratch, but you're able to more effectively learn it in the context of the unified pesticide discovery program and all the different modalities of data that you have.
Navvye: Yeah, precisely. In fact, let me give you an example of this. We were modeling protein–protein interactions, and there are maybe 20,000 protein–protein interactions with Kd values out there, while there are 1.8 million protein–ligand pairs. So we ran this ablation study where we trained the model with only protein–protein pairs, and the results were way worse than when we trained the model with protein–ligand and protein–protein pairs together. So multitask learning is definitely effective when you have less of one modality and you can force the model to learn similar interactions from another modality (in this case, protein–ligand interactions).
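The mechanics behind that ablation — one shared encoder updated by both an abundant task and a scarce task, each with its own head — can be sketched in a few lines. This is a toy, not Atomwell: the "sequences" are random feature vectors, the encoder is a single linear map, and the dimensions and sample counts are invented; the point is only that gradients from both tasks flow into the same shared weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: inputs are random feature vectors, and both tasks' labels
# come from the same latent map W_true, so there is shared structure
# for the encoder to learn. All dimensions here are invented.
d_in, d_rep = 16, 8
W_true = rng.normal(size=(d_in, d_rep))

X_pl = rng.normal(size=(1000, d_in))           # abundant protein–ligand pairs
X_pp = rng.normal(size=(50, d_in))             # scarce protein–protein pairs
y_pl = X_pl @ W_true @ rng.normal(size=d_rep)  # scalar "affinity" labels
y_pp = X_pp @ W_true @ rng.normal(size=d_rep)

# Shared encoder W plus one linear head per task, trained jointly.
W = rng.normal(size=(d_in, d_rep)) * 0.1
h_pl = rng.normal(size=d_rep) * 0.1
h_pp = rng.normal(size=d_rep) * 0.1
lr = 1e-3

def mse(X, W, h, y):
    return np.mean((X @ W @ h - y) ** 2)

loss_before = mse(X_pl, W, h_pl, y_pl) + mse(X_pp, W, h_pp, y_pp)
for _ in range(500):
    r_pl = X_pl @ W @ h_pl - y_pl              # residuals per task
    r_pp = X_pp @ W @ h_pp - y_pp
    # The shared encoder's gradient sums BOTH tasks' contributions:
    gW = ((X_pl.T @ r_pl)[:, None] @ h_pl[None, :] / len(y_pl)
          + (X_pp.T @ r_pp)[:, None] @ h_pp[None, :] / len(y_pp))
    gh_pl = (X_pl @ W).T @ r_pl / len(y_pl)    # each head sees only its task
    gh_pp = (X_pp @ W).T @ r_pp / len(y_pp)
    W -= 2 * lr * gW
    h_pl -= 2 * lr * gh_pl
    h_pp -= 2 * lr * gh_pp

loss_after = mse(X_pl, W, h_pl, y_pl) + mse(X_pp, W, h_pp, y_pp)
# The scarce task benefits because the abundant task keeps training
# the same encoder it reads its representation from.
```

The ablation Anand describes corresponds to zeroing out the protein–ligand terms above: with only 50 scarce-task examples shaping the encoder, the shared representation is far weaker.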
Corin: That's a really cool anecdote. Thanks for sharing that. This is a slight tangent, I suppose, away from pesticides, but I'd love for you to share some thoughts on structure- versus sequence-based modeling, particularly in an affinity context. Where do you see structure being useful? Where do you think sequence-only models are useful and how do you see this evolving, looking at all the data-generation efforts ongoing?
Navvye: Yeah, this is a very interesting question. Models like Boltz-2 and Chai-2 are really good at coming up with binders, and I think that's sort of coincidental. Adaptyv did this really cool study where, for the binders they found, they looked at ipTM scores and confidence metrics from AlphaFold2 and all of these different structure-prediction models and tried to correlate them with affinity, and there was basically no correlation.
So my thought is that you can't really have good affinity without a good structure first. But a good structure is not in and of itself a good metric for affinity, which is what you actually care about. I think a lot of these hits are coincidental and adding an affinity regressor to guide this process of optimizing proteins is definitely what we think is important.
We think structure is definitely very important. I just don't think it correlates that well with affinity. And for affinity, sequence-only models are just easier to train at scale. For example, you can train a sequence-only encoder model at billion-parameter scale, but I've yet to see a graph-transformer-based model trained at that scale.
Corin: What's the area of research inside or outside Bindwell that you're most excited about right now? It doesn't have to be something you're working on.
Navvye: There was this really cool paper that came out about protein test-time training. What the authors do is bring the test-time-training paradigm from NLP [natural language processing] over to bio. They take a pre-trained ESM model and, given just one protein, they unfreeze the ESM encoder and train it for, let's say, 40 epochs on minimizing the perplexity of that amino-acid sequence. The idea is that better representations from the encoder directly improve downstream tasks, and I think that's really interesting.
I think if that's really the case, which they show in their paper, then you can more or less reach the same performance with a smaller encoder model using test-time training as you would with ESM 3B (frozen), for example. And I think that's really interesting for higher-throughput screening of protein interactions: protein–protein, protein–ligand, and so on.
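The test-time-training recipe Anand outlines — take a pretrained model, and for a single test sequence run a few gradient steps minimizing its own perplexity before reading out representations — can be shown with a toy stand-in. A real setup would unfreeze an ESM encoder; here a bigram logit table over the 20 amino acids plays the "pretrained model," and the sequence is invented. The recipe's shape is the same: per-sequence loss, a handful of epochs, perplexity drops.

```python
import numpy as np

rng = np.random.default_rng(1)
AA = "ACDEFGHIKLMNPQRSTVWY"    # 20 standard amino acids
V = len(AA)

# Stand-in for a pretrained language model: next-residue bigram logits.
logits = rng.normal(size=(V, V)) * 0.1

def nll(logits, seq):
    """Mean negative log-likelihood of next-residue prediction."""
    ids = [AA.index(c) for c in seq]
    total = 0.0
    for prev, nxt in zip(ids, ids[1:]):
        row = logits[prev]
        logp = row - np.log(np.sum(np.exp(row)))   # log-softmax
        total -= logp[nxt]
    return total / (len(ids) - 1)

seq = "MKVLAAGLLKVLAAGLLK"     # one hypothetical test protein
ppl_before = np.exp(nll(logits, seq))

# Test-time training: 40 gradient-descent "epochs" on this one sequence.
lr = 0.5
ids = [AA.index(c) for c in seq]
for _ in range(40):
    grad = np.zeros_like(logits)
    for prev, nxt in zip(ids, ids[1:]):
        p = np.exp(logits[prev] - logits[prev].max())
        p /= p.sum()                      # softmax over next residue
        g = p.copy()
        g[nxt] -= 1.0                     # d(NLL)/d(logits[prev])
        grad[prev] += g / (len(ids) - 1)
    logits -= lr * grad

ppl_after = np.exp(nll(logits, seq))
# Perplexity on the adapted sequence drops sharply after TTT; in the
# paper's setting, the adapted encoder's representations then feed
# downstream property predictors.
```

The cost profile is also visible here: TTT trades one forward pass for a short optimization per query, which is why it pairs naturally with smaller encoders in high-throughput screens.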
And the second thing that I'm really excited about is diffusion language models. I think they're absolutely incredible. We trained our own diffusion language model a couple of weeks ago and got pretty good results compared to just the AR [autoregressive] paradigm. And I think the next wave is definitely going to be diffusion and reasoning on diffusion. So that would be really interesting to see.
Corin: Are you excited about diffusion language models because you want to do diffusion PLMs [protein language models], or because it's going to be awesome to get really fast code internally?
Navvye: Both, actually. I think diffusion PLMs are definitely interesting because a lot of the time you have to in-paint a sequence given context. Things like that are definitely interesting for protein language models and chemical language models but, more importantly, I think the speed at which we can generate sequences is definitely interesting. I think Gemini Diffusion reaches something like 1,438 tokens per second, which is absolutely incredible.
I'm just excited to see how diffusion compares to auto-regressive models, especially in data-scarce settings. A lot of high-quality data is inherently scarce, and diffusion models are provably better than auto-regressive models when it comes to scarce data. So seeing how we can extend our RL [reinforcement learning] understanding to diffusion models will be pretty interesting as well.
Corin: Last question. What can we expect to see from Bindwell over the next couple years, and where do you hope to be in five years from now?
Navvye: Five years from now, I think we'll have a pesticide on the market that's selling pretty well, and we'll have expanded quite a bit and vertically integrated. The end vision of Bindwell, I think, is to automate molecular discovery in and of itself, which is a very hard problem: it absorbs trillions of dollars of effort across the globe.
It's a lofty goal, and hopefully in five years we'll be closer to achieving it or will have actually achieved it. But I think I'd be pretty content if we have a pesticide on the market that's selling like hotcakes. In addition to that, we plan to set up a lab with higher throughput than the one we have right now. So automated lab testing is something we're really looking forward to.
Corin: Awesome. Thank you for sharing, Navvye. Congrats again on the seed and looking forward to following your journey.
