Matt Beecher: Hey, everyone, Matt Beecher here, CEO of Neocova and welcome to the MongoDB podcast.
Mike Lynn: To keep pace with market demands as well as banking and financial industry trends, banks have to lean into data. They have to leverage analytics to maximize the value of the information they already have at their disposal. My guests today are helping community and regional banks do just that. Matt Beecher and Matt Almeida of Neocova join me today to talk about how Neocova is leveraging MongoDB to capture, process, and identify key trends in their customers' data and help them make better decisions. MongoDB's biggest user conference ever is coming to New York City, June 7th through the 9th. Visit mongodb.com/world-2022. Use the code podcast when you register for 25% off your tickets and some special podcast swag.
Mike Lynn: Well, welcome to the show today. We're going to be talking with some folks from Neocova, Matt Beecher and Matt Almeida. Matt Beecher, welcome to the show. It's great to have you on the podcast.
Matt Beecher: Thank you and thanks for having us.
Mike Lynn: And Matt Almeida, welcome.
Matt Almeida: Thank you. Incredibly excited to be here.
Mike Lynn: If we could do some introductions. Matt Beecher, why don't you go first? Tell the folks who you are and what you do.
Matt Beecher: Sure. I'm CEO of Neocova, and that sort of means I do everything and try to herd the cats along. But from a background perspective, I've been a FinTech guy since the late 90s, when we called it e-finance. So I've been around and seen a lot of things.
Mike Lynn: Well, welcome once again to the show. Matt Almeida, welcome. Tell the folks who you are and what you do.
Matt Almeida: Hey, I'm Matt Almeida. I'm the vice president of engineering for Neocova. Just like Matt Beecher, I herd cats, but I do it for our technology. I have a background that's heavy in data and distributed architecture, so I wear a bit of a generalist hat, pop around, and see what interesting solutions we can whip up.
Mike Lynn: Today we're going to be talking about Neocova. I think it might be helpful, Matt Beecher, if you could introduce us to who Neocova is as a company. What is it that you do?
Matt Beecher: Sure, I'd be delighted to. We are a banking technology startup. If we stop there, that doesn't say anything at all, because there are a lot of folks out in this space. But specifically, we're focused on helping banks better utilize their largest asset, which is data, and leveraging that to drive better outcomes. Better outcomes for the bank itself and better outcomes for their customers. We're in a fast-paced environment where banking itself had traditionally been pretty stagnant. There's an old adage in banking, 3-6-3, which means pay 3% interest, charge 6% interest, and be on the golf course by 3:00. It didn't require a lot of effort to maintain a bank. The moat was so big and impenetrable that it wasn't a technology-driven business, it was a relationship-driven one. I think that market has changed dramatically over the last decade, and especially over the last five years. As for what we set out to do: we saw a huge opportunity in that space to modernize data and the accessibility of data for community banks, which, by the way, represent 95% of banks in the United States. We all know the big banks, and they have thousands of people at their disposal. Our focus has really been to build a modern data and analytics platform that is uniquely structured for those banks to meet the ever-changing demands and requirements of their customers. Those customers could be small to medium-sized businesses. They could be established companies. They could be you and me. These changes and demands are pretty rapid. So that's what we set out to do, and that's what we're doing right now.
Mike Lynn: Well, so I'm curious about the solutions that you offer these banks. First of all, 95% of the banks in the world, or in the country...
Matt Beecher: In the United States.
Mike Lynn: ...in the United States are community banks. What differentiates a community bank from a larger bank? What's the volume level?
Matt Beecher: Simply, a typical community bank is going to have a handful of locations and serve its local community. That's sort of the general definition of a community bank. If you take it a little bit further, it's generally defined by assets. Once banks get above the $10 billion level in assets, that's when they start to fit the definition of a regional bank. But anything below that, we can generally categorize as community banks, which are structured specifically to serve their communities geographically.
Mike Lynn: In terms of the systems and solutions that you're offering, what do those look like? Is it cash management? Is it HR? Is it payroll?
Matt Beecher: None of the above. We are strictly focused on data and data analytics. What we've built is a platform for these banks, banks that have largely been ignored from a technology perspective or have been held captive by larger service providers like FIS, Fiserv, or Jack Henry, and yes, I will name them by name. That's the classic sort of monolith that is really, really hard to manage. Some of these banks are running mainframes and coding in COBOL to this day. So we're really not a front-end user experience company. We're really backend, making sure that the piping is modern and allowing these banks to easily drive analytics out of our platform. We built everything around three primary principles in our architecture. One is being cloud-based. The second is being modular. The third is being best in class. So we look at what we do as ingesting and transforming data from multiple sources in a bank. Banks are notorious for being incredible endpoint consumers. They love endpoint solutions. That creates a huge problem. So we're able to take data, transform that data into a single unified language, store that data, and then we've created an application layer on top of that, into which we can plug our own tools, which are advanced analytics, and third-party tools as well that are best in class. So we're bucking the trend of that monolith structure in banking technology.
Mike Lynn: Okay, so I'm starting to understand the business model a little better. We've got a cloud- based system. You are helping these banks by ingesting their data and allowing them to find insights about their customer base?
Matt Beecher: Yeah.
Mike Lynn: What are some of the things that you're helping them improve on with that knowledge, with that data?
Matt Beecher: So there are a couple of things. First is operational efficiency. We see this all the time, Matt Almeida and myself, when we're talking to banks. They're hiring anywhere between 10 and 15, in some cases 20 FTEs, that are doing nothing but Excel modeling and dragging data out of hard-to-get-to data sources to provide some level of analytics. By the time they get to an answer, it's way too late. So from an operational perspective, we're really focused on adding operational scale to an organization. The second piece is analytics. This is driving speed to an actionable insight through one of our tools that we call Spotlight. This is very ML- and AI-heavy. These are answering questions that we would all think are really easy to answer. How many of my customers have a Roth IRA? It's hard to get to, and sometimes it's mind-blowing, but it's really tough for these banks because their systems don't allow them to do that. Then the last layer is application. Specifically, being able to segment your customer base, which again sounds really simple and is really tough to do. Targeting future product recommendations, really tough to do. You'd think it would be simple, because, shoot, you fire up Netflix every day and it does that automatically. So we're focused on building a knowledge graph that allows a bank to simply query the data to search for new opportunities, or our stack pushes those opportunities to them in real time. What's interesting is that we've moved from what was static data consumption into real-time banking. That's where everything's moving, and that's what gets really exciting for us. But you can't do that if you don't have a platform that's built for scale.
Mike Lynn: Well, speaking of platforms built for scale, you're obviously a MongoDB customer.
Matt Beecher: Yes, happily.
Mike Lynn: Well, that's great. I love to hear that. Had you used other platforms besides MongoDB prior to making the shift to MongoDB?
Matt Almeida: I can take that question. We really didn't. Mongo was very attractive to us due to a lot of the advanced capabilities it comes with: the NoSQL structure and data strategy, which is what we wanted, and their cloud offering with Atlas, which is phenomenal and right up our alley in terms of security, data isolation, and ease of use when plugged into other cloud resources.
Mike Lynn: I imagine with the structure, the volumes, and the sheer velocity of data that you're dealing with, NoSQL seems like an easy pick. You said that you really didn't consider anything prior to MongoDB. You didn't consider a relational database or anything like that?
Matt Almeida: This is a really interesting problem. What we see in a lot of these banks' cores is that they'll have a new piece of information, something new, something unique to just themselves. What accounts have e-pay? What are the different relational paradigms between different customers and different accounts? What often happens is, because they use a relational model, they'll keep the data normalized but add another table. What that has led to is the messy data that is the bread and butter of our problem. It causes inefficiencies where what works for one bank in extracting a piece of data may not work for another, because it's a totally different structural framework. And so, using NoSQL really forced us to think about: what is this data? How is it going to be used? And to put some thought into what that data format looked like before we started developing. It has definitely cost us more work upfront than if we had just run SQL and gone with the standard of the industry. But because of this, it's forced us to become a lot more clever about what data we're looking at and what our strategies are for accessing it. I do think that's given us a sharper weapon, in that we understand both the data and the industry better in creating this data format.
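The schema tradeoff Almeida describes, folding each bank's one-off extension tables into a single flexible document rather than bolting on another JOIN target, can be sketched roughly like this. The field names and source layouts below are invented for illustration, not Neocova's actual format:

```python
# Illustrative sketch: normalizing a core-system account row plus any
# bank-specific extension tables into one unified document, the kind of
# shape a document store like MongoDB encourages. All names are hypothetical.

def normalize_account(core_row, extension_rows):
    """Fold a core account row and its per-bank extension rows
    into a single document."""
    doc = {
        "account_id": core_row["acct_no"],
        "customer_id": core_row["cust_no"],
        "type": core_row["acct_type"].lower(),
        "features": {},
    }
    # Each extension row (e.g. one bank's "e-pay enrollment" table)
    # becomes a keyed entry instead of another normalized table.
    for row in extension_rows:
        doc["features"][row["feature"]] = row["value"]
    return doc

doc = normalize_account(
    {"acct_no": "A-100", "cust_no": "C-7", "acct_type": "CHECKING"},
    [{"feature": "e_pay", "value": True}],
)
```

The point of the sketch is that a second bank with a different extension table produces the same document shape, so one extraction query serves both.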
Mike Lynn: While we're on the tech side of things, Matt Almeida, can you talk a little bit about the frameworks and the architecture of Neocova solutions? I mean, you mentioned that it's cloud based, but what are you using for development platforms?
Matt Almeida: Our ingestion is handled in Python. We found it's just a lot more efficient and powerful when dealing with data and data structures. All of our pipeline is essentially shaped out in Python, along with our data format. Then we have a front end to access this. We have a separate team that focuses on that, which works with Node.js. So we have a different kind of web application stack, built on React and Node on AWS ECS, to access this.
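A minimal sketch of what a Python-shaped ingestion pipeline can look like: small, composable stages chained over raw records. The stage names and record layout here are hypothetical, not Neocova's actual pipeline:

```python
# Toy ingestion pipeline: parse raw lines, then clean each record.
# Generators keep the stages composable and memory-friendly.
import json

def parse(raw_lines):
    # Stage 1: turn raw JSON lines into dicts.
    for line in raw_lines:
        yield json.loads(line)

def clean(records):
    # Stage 2: strip whitespace from string fields; real pipelines do far more.
    for rec in records:
        yield {k: v.strip() if isinstance(v, str) else v
               for k, v in rec.items()}

def run_pipeline(raw_lines):
    return list(clean(parse(raw_lines)))

rows = run_pipeline(['{"name": "  Acme Co  ", "balance": 1200}'])
```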
Mike Lynn: Python is one of my favorite languages to work in for sure. The popularity of Node.js, I mean, it's crazy. I guess at some point we'll want to talk about whether you're hiring, if that's something that you're interested in getting some attention on. Let's talk a little bit more about the business impact. You mentioned the importance of data. Talk a little bit about some of the metrics that you're looking at and exposing for your customers in the analytics realm. How important is that to your overall strategy as a business?
Matt Beecher: For us, it's about ROI. If a bank is going to go down the path of modernizing their "data stack," it has to be for a really good reason, and there has to be a deliverable ROI on the back end. For a banking institution, it really comes down to two things. Can I drive more deposits? Well, actually three things. Drive more deposits, that is, acquire new customers. Can I sell my existing customers more product? I think that's probably the biggest use case we're focused on right now. Then the second... the third, I'm sorry, is how do I stop the outflow of assets to third parties? That's always sort of a risk, right? If I take the last one first, again, it sometimes sounds simple, but banks don't have a great sense, on a granular level, of where outflows are happening in their banks, so that is a primary use case we focus on. "Hey, banker..." and here I'm using the voice of our system. We can push notifications to a bank and say, "Hey, we're seeing a lot of deposits to cryptocurrency platforms," or "Here are your outflows going to cryptocurrency tools and platforms." Okay, now that drives strategy. What should we do about it? Because that's money leaving our bank, and that's not a good thing. Another thing we see a lot of banks wanting is better householding and a better sense of what's happening with held-away accounts, that is, accounts that aren't part of the bank. Mortgages are sort of the canonical example. It's like, "Oh, geez, we just saw Matt Almeida had an X-thousand-dollar outflow to Citibank. If we apply logic to that over time, it's probably a mortgage. Now we can identify that Matt Almeida has a mortgage at Citibank. That's an opportunity for us." That doesn't exist at all in banking today. You would have to handcraft that analysis. I'm not joking, this is an Excel exercise. You've got to have the right transaction metadata to do it. It's impossible. That's on retail banking.
On commercial banking, which again is a little bit different, there are interesting things, like the fact that banks make a lot of money on credit cards. If a bank can tell that there's a monthly outflow going to Amex from a commercial customer, that's an opportunity to say, "Hey, we can actually do better. Why don't you think about opening up a business credit card with us?" It's going to be at a cheaper rate, or whatever the case may be. Huge opportunities. Merchant services are another big one, where banks do not have insight even though they offer merchant services. Here's one example, real life. The bank had a hunch that they were losing about $1 million in merchant fees a year. We ran the analytics. That number wasn't one, it was five. So $5 million of revenue was transacting outside of the bank with entities like Stripe and Square and others that are able to offer merchant services. But the cost structure is so out of whack. It's three and a quarter percent, where the bank can actually do that service at a point and a half, and they didn't know it. They just simply didn't know it. So those are real-life examples of being able to harness this data and then translate it into something that is truly actionable for the banks. It's powerful. We're not talking about basis-point improvements. We're talking about massive, massive improvements for a bank.
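The "apply logic to that over time" idea Beecher describes, spotting an outflow that recurs at a monthly cadence with a near-constant amount and flagging the counterparty as a likely held-away obligation, could be sketched like this. The cadence window and amount tolerance are invented thresholds for illustration, not Neocova's actual rules:

```python
# Hypothetical recurring-outflow detector: flag counterparties whose
# outflows repeat roughly monthly at a near-constant amount.
from collections import defaultdict
from datetime import date

def recurring_outflows(txns, min_occurrences=3, tolerance=0.05):
    """txns: list of (date, counterparty, amount) tuples."""
    by_party = defaultdict(list)
    for when, party, amount in txns:
        by_party[party].append((when, amount))
    flagged = []
    for party, items in by_party.items():
        if len(items) < min_occurrences:
            continue
        items.sort()
        amounts = [a for _, a in items]
        mean = sum(amounts) / len(amounts)
        # Amounts stay within tolerance of the mean...
        steady = all(abs(a - mean) <= tolerance * mean for a in amounts)
        # ...and consecutive payments land 25-35 days apart.
        monthly = all(25 <= (b[0] - a[0]).days <= 35
                      for a, b in zip(items, items[1:]))
        if steady and monthly:
            flagged.append(party)
    return flagged

txns = [
    (date(2022, 1, 1), "Citibank", 2400.0),
    (date(2022, 2, 1), "Citibank", 2400.0),
    (date(2022, 3, 1), "Citibank", 2400.0),
    (date(2022, 1, 15), "Coffee Shop", 4.50),
]
likely = recurring_outflows(txns)  # flags "Citibank" only
```

A production version would layer in merchant metadata and amount bands (a $2,400 monthly payment to a bank reads very differently from a $40 one), but the cadence-plus-steadiness test is the core of the heuristic.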
Mike Lynn: Well, that's exciting. Knowing that the possibilities are there, that's really exciting. Just through visibility of the data. Can you talk a little bit about the scale? Are you comfortable sharing the number of customers you have and maybe just in terms of the volumes of data you're dealing with?
Matt Beecher: Look, from a customer perspective, myself, Matt, and the team set out to really build... as with any startup, you go through fits and starts on a journey to find the product that really hits and resonates with the right target audience. I'd say, over the last few years, we've in earnest built this new sort of data stack, which we're incredibly excited about. Over that period of time, we're now working with 10-plus banking customers. In the world of banking, that is a lot, with a lot more that are very interested in what we're doing and excited about it. On the volume of data, Matt Almeida is probably better suited to answer that question.
Matt Almeida: I think a general ballpark is we'll see anywhere from about 10 to 20 gigs of data from a bank a month. That differs based on the size of the bank and the data sources they're interested in pulling in, but there's quite a lot. Not a massive amount. We're not dealing with petabytes a day, yet. Give us some time. But it's great that we have the tools able to handle that scale without causing a hiccup in the slightest with our strategy.
Mike Lynn: We do have some technical listeners. In addition to the framework that we talked about, are you comfortable sharing a little bit more detail about the architecture of your ingestion and analytics platform?
Matt Almeida: Right now, we're working with a partner to look at ways we can extract that data more efficiently. But as it enters our system, we'll generally take it as a flat file and start iterating through it to parse and process the data, make sure it gets normalized against our format, and then store it into Mongo. From there, it's an event-streaming type of system, because there's usually an order in which we need to ingest the files, and then an order in which analytics can kick off from that point. So once we finish certain stages of ingestion, that will kick off events to a rules engine, which will run analysis and output some of these events, adding new metrics to the data that we wouldn't otherwise have captured. That forms the core of ingestion going into our storage layer. From this point, a user will have many different ways of interacting with this data. I mentioned the Node web application, but we're looking at other extensions into, say, Salesforce or common BI tools. Different ways to visualize and understand this data. I don't think we do anything too incredibly fancy on that end of the house, but we have some very clean code that sits optimized for surfacing this data and delivering it in a fashion that's going to serve our customers.
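The event-driven handoff Almeida describes, where completing an ingestion stage kicks off rules that derive metrics the raw files didn't carry, could be sketched with a toy in-process rules engine like this. A real system would sit on a proper event stream, and the event names and rules here are hypothetical:

```python
# Toy rules engine: rules subscribe to named events, and finishing an
# ingestion stage emits the event that fires them in registration order.
from collections import defaultdict

class RuleEngine:
    def __init__(self):
        self._rules = defaultdict(list)

    def on(self, event, rule):
        """Subscribe a rule (a callable) to an event name."""
        self._rules[event].append(rule)

    def emit(self, event, payload):
        """Fire all rules for the event and collect their outputs."""
        return [rule(payload) for rule in self._rules[event]]

engine = RuleEngine()
# When account ingestion completes, derive a metric that wasn't in the
# source files: total balance across the ingested accounts.
engine.on("accounts_ingested",
          lambda accts: {"total_balance": sum(a["balance"] for a in accts)})

metrics = engine.emit("accounts_ingested",
                      [{"balance": 100.0}, {"balance": 250.0}])
```

The ordering constraint from the transcript (files must be ingested in sequence before analytics fire) falls out naturally: each stage only emits its event once the prior stage's handler has run.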
Mike Lynn: Now, are you doing any data enrichment where you reach out to other sources and enrich that data?
Matt Almeida: We do. That's actually part of the ingestion pipeline. Once we get certain pieces of data in, say a transaction from the banking core, we have other third-party APIs we can reach out to for more detailed information. What does the merchant code translate to? Where did this occur? What type of branch of a retail store did this happen at? We can map really specific insights into what's happening with this user, with their transactions, and with their overall bank account.
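A hedged sketch of that enrichment step: resolving a transaction's merchant category code (MCC) to something human-readable. In production this would be a call to a third-party API; a tiny local table stands in for it here, and the transaction fields are invented for illustration:

```python
# Hypothetical enrichment step: annotate a raw transaction with a
# merchant category resolved from its MCC. The two MCCs below are
# standard codes; the lookup table stands in for a third-party API.
MCC_TABLE = {
    "5411": "Grocery Stores, Supermarkets",
    "5812": "Eating Places, Restaurants",
}

def enrich(txn):
    """Return a copy of the transaction with a merchant_category field."""
    enriched = dict(txn)
    enriched["merchant_category"] = MCC_TABLE.get(txn.get("mcc"), "Unknown")
    return enriched

txn = enrich({"amount": 42.10, "mcc": "5411"})
```

Keeping enrichment inside the ingestion pipeline, as described above, means downstream analytics and the rules engine see the annotated document rather than having to resolve codes at query time.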
Mike Lynn: Oh, that's fantastic. That's data visibility. So visibility starting with a transaction and then to an individual user. That's fantastic visibility. Are you able to share metrics around improvements that you've been able to help your customers experience?
Matt Beecher: Yeah, it's interesting. There are two different ways to measure success. A lot of banks will come to us and, this is a generalized paraphrase, but it's pretty much consistent across all banks, say: "Our core processing system, which again is typically one of the big three, FIS, Fiserv, Jack Henry... our data is a mess. Therefore, we need a new core processor." That is just flat-out wrong. It misses the point of everything. Your data isn't a mess. Well, your data is sort of a mess because of that, but what you really need is a better data management product or solution. It's almost saying, don't fight the tape here. From a metrics perspective, there's one way to approach this, where banks look at it and say, "I need a data solution, so I am going to build it myself." We see a lot of that. "We are going to hire developers," and they don't even know what developer means, but they say this. "We're going to hire developers, which, it's magical, as if they're all the same and that's not a problem in a massively competitive market. We're going to hire a bunch of developers, we're going to engage with a data warehouse provider, and we're good. What's the big deal? We're great." What happens 10 times out of 10 is that it takes five years to implement, it costs millions and millions of dollars, and they're kind of lost. Where we come in is typically to try to fix those problems. We've done that now a few times. The second is, for those folks thinking about doing that, we can come at it just from an ROI perspective. We're talking about 100%, 200%, 300% ROI just from a cost perspective, for a platform solution versus a DIY solution. So that's one side of it, the operational side. That's operational ROI, which again is 200%, 300%, 400%. It's massive. But then, from a practicality perspective, there's real stuff.
Just having this insight and being able to get deeper into their customer segments drives better campaigning for that bank and more appropriate conversations. I gave you the example of discovering merchant fees that no one knew about. Now that bank can take it a step further and start approaching those people. What we are building right now, though, are really exciting things. As I said before, we think about data sources, the data platform, and then applications on top. One of the applications we're integrating with right now is Salesforce, and we're looking to do others as well. We're not building a CRM. We don't want to. But we want that CRM system to be able to use good data, and more importantly, at that level, to employ reverse ETL and push that data back down into the data system as well. Where I'm going with that is that it becomes push-button campaigning inside their existing CRM system with those data insights. And that's really, really powerful stuff.
Mike Lynn: So really, it's a play in unleashing the power and the value of the data that they already have.
Matt Beecher: Yeah, that's exactly it. They have the data. It's there. It's there.
Mike Lynn: So, you're using MongoDB, we've covered that. What products in the MongoDB platform are you using?
Matt Almeida: We use Atlas, like I mentioned earlier. Phenomenal distributed cloud offering. We're also leaning into Charts right now. I know you've had a few sessions on Charts on this podcast before, but for anyone not familiar, it's a phenomenal BI tool. We actually have some interesting use cases where we want to embed it in a per-customer instance strategy. The Mongo team has been kind enough to meet with us, go over the product roadmap, and see how we can work together to take the product to the next level, so I'm really excited about where that grows. We're also very excited to leverage the Profiler; I've used that a hint here and there, built into Atlas. The BI Connector as well, to find ways to export that data to other sources. So there's a litany of here-and-there tools that we use. Oh, and can't forget Compass. Compass is a phenomenal tool that we use to access and manipulate the data. We try to find any piece of the stack you all have and see how we can use it.
Matt Beecher: The BI Connector tool as well, Matthew. I don't know if you mentioned that, but we're going to utilize that as well.
Mike Lynn: So you're using that to connect it to some BI tools or using Tableau?
Matt Beecher: Yeah, that's exactly right. Part of our stack is that we have a very, very robust analytics tool. But there are instances out there where a bank is just embedded with an existing BI tool, and that's fine. We're not going to fight them on that one. So we are utilizing the BI Connector for folks that are using Tableau or Power BI or other tools in the BI and visualization stack. That's pretty great for us. We call it BYOBI, and it seems to work for our customers to have that optionality.
Mike Lynn: I mean, so if you're ingesting this data and you're providing analytics and, I guess... are you reformatting the data? Are you creating new data sources for them and then offering those back to the customers?
Matt Almeida: Yeah, we have a proprietary data format, which we manipulate everything into as we ingest. Like I said, we spent some time making sure we develop this to be optimal for the extraction, for the querying of data. So, we think that's where the magic lies. Once we can take all this data and put it into one kind of universal source, that's where we can unlock all the powerful access across any bank.
Mike Lynn: Well, it's certainly been a great conversation and there's so many amazing things happening at Neocova. I'm really excited for your growth and all of the great things happening. I'm curious, this has to be a great growth time for you. I would imagine that you're hiring. Are there specific roles you're looking to fill and what type of skills would you be looking for if you're hiring?
Matt Almeida: We're actually looking for data engineers right now. What we're looking for is someone who's astute and sharp with the use of Python. We like very clean, neatly typed, organized code. So, having a preference or a leaning towards working in that manner is a huge boon. We also, of course, with MongoDB and the NoSQL stack, really want to have a heavy background in that. Someone who's worked with the data models before and would be no stranger to jumping into a code base, picking up where we are, and helping us drive forward faster.
Mike Lynn: Well, if folks are listening and they've got some Python skills and an interest in working with MongoDB, where should they go to get more information?
Matt Almeida: We have our LinkedIn page. I know we have a post up on AngelList right now. I can find that link and see if we can attach it later.
Mike Lynn: We'll include links in the show notes. I want to thank you both for joining me. Is there anything else you'd like to mention before we begin to wrap up?
Matt Beecher: No, I think we've covered it all. We appreciate your time and the opportunity.
Mike Lynn: Likewise, likewise. Matt and Matt, thanks so much.
Matt Almeida: Thank you.
Matt Beecher: Thank you, Michael. It was a pleasure. Thanks.
Mike Lynn: Thanks so much to Matt and Matt for stopping by. Thanks to you, listeners. Make sure you check out mongodb.com/world-2022, MongoDB's biggest user conference, June 7th through the 9th in New York City. Use the code podcast when you register for 25% off your tickets and some very cool podcast swag.