$500K+ revenue opportunities unlocked within a month from timely insights delivered through no-code and self-service analytics
400 hours/month of manual analytic work eliminated through automation
25% reduction in data requests to engineers
25% reduction in data sync and replication costs expected
Arrive Logistics is a leading multimodal transportation and technology company delivering unparalleled service and custom strategic solutions. With over 1,500 employees, 3,500 customers, and 60,000 carriers in its network, Arrive is one of the largest firms in the 3PL industry, having surpassed $1.6 billion in 2021 revenue. The company has been recognized as a top workplace in Austin by Built in Austin and The Austin Statesman and in Chicago by The Chicago Tribune.
Alex Schwarm is the Head of Data Science, Data Engineering, and Analytics Engineering at Arrive Logistics. Dr. Schwarm holds a Ph.D. in Chemical Engineering and has over 30 patents across digital marketing, semiconductor manufacturing, solar manufacturing, and logistics.
We have Snowflake as our data warehouse and one of the most modern data stacks to sync, model, and deliver data from it. And we have one of the best data teams I have had the pleasure of working with in my career. Yet we were constantly underwater with data requests from our business teams.
While we have done a great job curating standard data assets and BI dashboards, there is always demand for more. For example, our business team recently needed to analyze our support call data to better understand our SLAs. Requests like these require a new data pipeline, which takes 2-4 weeks to build, test, and deploy. That is the right process, but the effort is not trivial, and the time lag means the business team could lose the opportunity to act on the insights.
The problem for most data engineering teams is that there is always more work than can be done, and most of it is urgent. For us, all of our bandwidth was consumed maintaining our data platform, enhancing pipelines for our systems, and building new ones. Other strategic priorities were crowded out, and we struggled to keep up with the business's growth. When new data requests came in from business teams, there was a constant trade-off between short-term needs and long-term priorities.
Even when we had time to build requested pipelines, our analytics had a “last mile” problem. Our business teams live in the business apps they use, not in our BI dashboards or data warehouse. They want data delivered and refreshed in the tools they use every day (CRM, Google Sheets, Slack, etc.). To activate insights in those apps, we’d have to create custom integrations with each system, which always fell to the bottom of our priority list.
As Arrive Logistics has grown and matured digitally, the number of business applications we use has grown exponentially. We typically see revenue double every year, which means our data footprint doubles every year too. And as our business matures, so does the complexity of our analytics and the scope of the business metrics we track. We probably see the number of analytics assets increase 4x yearly.
When you put it all together, we’re looking at 8-10x increases year-over-year in the data we process. This data improves our business performance and is a huge competitive differentiator for us. But this also means that traditional data sync tools are no longer cost-effective. These tools often are not nimble enough to build new connectors and support our needs.
We partnered with Savant to address these challenges. Their no-code analytics automation technology has been a game-changer for us. It’s a full-stack platform for data sync, analytics, and data delivery for our analysts. My team uses the platform every day to automate end-to-end dataflows and deliver insights at lightning-fast speeds – without burdening our data engineers.
In just three months, the team has built over 25 analytics bots in Savant. These bots range from marketing and sales analytics to HR, finance, and product analytics.
Our business teams are unlocking new revenue opportunities from these insights, and our data engineering team is happy because our analysts can self-service their needs. And I’m happy because my data infrastructure bill is down. It doesn’t get much better than this.
Operating in a fast-moving, large-scale industry while growing quickly means that understanding what is working well – and what isn’t – can have a dramatic business impact. Trying new processes and strategies requires new data collection and new pipelines. Savant allowed us to quickly analyze the results of a new pricing strategy by pulling in disparate data from multiple new sources. Without it, we would have waited weeks for the pipeline to be built and missed out on over $500K in revenue opportunities. Getting to these insights quickly – while also ensuring the data is accurate and comprehensive – lets us run more experiments and data-driven analyses and optimize our business results even faster.
We estimate each bot run saves at least 30 minutes of manual analytics work for my team. In July alone, Savant bots ran over 900 times, saving at least 450 hours of manual work. Over 12 months, we expect over 5,000 hours returned to the business.
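As a quick sanity check, the savings figures above fit together as simple arithmetic (every number here comes from the case study itself; nothing new is assumed):

```python
# Back-of-the-envelope check of the time savings cited above:
# at least 30 minutes saved per bot run, and over 900 runs in July.
MINUTES_SAVED_PER_RUN = 30
RUNS_IN_JULY = 900

hours_saved_july = RUNS_IN_JULY * MINUTES_SAVED_PER_RUN / 60
hours_saved_year = hours_saved_july * 12  # assuming July is a typical month

print(hours_saved_july)  # 450.0 -> the "at least 450 hours" in a single month
print(hours_saved_year)  # 5400.0 -> consistent with "over 5,000 hours" a year
```

The annualized figure simply extrapolates July's run rate, which is why the article hedges it as "over 5,000 hours" rather than a precise total.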
Savant lets our analysts pull raw data from various sources into our data warehouse, model and deliver the insights back to the business app without having to write any code. These full-stack capabilities have empowered analysts to take ownership of the entire data pipeline (ETL, analytics and reverse ETL) and removed the dependency on data engineers.
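To make the "full-stack" idea concrete, here is a minimal sketch of the three stages such a pipeline spans – extract, model, and deliver back to a business app (reverse ETL). The data, the SLA metric, and the deliver step are all hypothetical placeholders invented for illustration; the point is only the shape of the end-to-end flow that analysts own without code:

```python
# Illustrative sketch of an end-to-end dataflow: extract raw rows (ETL),
# model them into a metric, and push results back to a business app
# (reverse ETL). All data and stages here are hypothetical placeholders.
from collections import defaultdict

def extract():
    # Stand-in for pulling raw support-call rows from a source system.
    return [
        {"rep": "alice", "calls": 12, "sla_met": 10},
        {"rep": "alice", "calls": 8,  "sla_met": 8},
        {"rep": "bob",   "calls": 15, "sla_met": 11},
    ]

def model(rows):
    # Aggregate raw rows into a per-rep SLA attainment rate.
    totals = defaultdict(lambda: {"calls": 0, "sla_met": 0})
    for r in rows:
        totals[r["rep"]]["calls"] += r["calls"]
        totals[r["rep"]]["sla_met"] += r["sla_met"]
    return {rep: t["sla_met"] / t["calls"] for rep, t in totals.items()}

def deliver(metrics):
    # Stand-in for reverse ETL: push metrics into a CRM, sheet, or Slack.
    for rep, rate in sorted(metrics.items()):
        print(f"{rep}: SLA rate {rate:.0%}")

deliver(model(extract()))
```

In a no-code platform, each of these functions corresponds to a configured step rather than hand-written code, which is what lets analysts own the whole pipeline.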
We consider the pipelines built by analysts to be “prototype” pipelines; the data engineering team gets involved in productizing a pipeline only if we estimate sufficient ROI. Using Savant in this process delivers value in three ways. First, we respond to stakeholder requests faster, and our time to value improves, because analysts are not dependent on data engineering’s backlog and schedule. Second, we iterate more quickly – and ultimately more thoroughly – with stakeholders, enhancing pipelines without waiting on data engineering prioritization. Third, our data engineers get involved only when we see the need to fully productize a pipeline. This approach has significantly reduced new data requests to our data engineering team – on average, demand is down by at least 25%.
Savant has bi-directional connectors to most cloud business apps, data warehouses, file systems, and BI platforms, and can quickly build new connectors that other vendors typically would not. We are starting to leverage Savant to sync data to and from our data warehouse, especially for systems that other vendors traditionally do not support, or where data volumes are so large that other vendors are simply not cost-effective. With this approach, we expect to reduce our data infrastructure costs by 25% over the next 12 months, even as our data volume grows 8-10x year-over-year.
Marketing and Sales teams have an insatiable demand for performance measurement and typically have very time-sensitive needs. In many cases, if a data organization cannot support their data needs, then these groups will hack a solution themselves. These siloed, mini data platforms can grow in complexity and cost very quickly. At some point, the data organization is called in to address mounting issues, but the project is now much more challenging than if we had been able to own it from the start.
The problem with inheriting these systems from business teams is that they likely do not follow our data best practices and standards, are missing necessary documentation, and aren’t built for scalability and maintainability. “Unwinding” these implementations is costly, time-consuming, and typically creates tension between stakeholders and the data organization. By using Savant, we avoid these issues by quickly prototyping and implementing the data pipelines and resulting data assets that stakeholders need.