
The benefits of unifying API access through SQL

William Tsu
Data Analyst
Experienced data analyst working with data visualization, cloud computing and ETL solutions.
February 18, 2022


Microsoft SQL Server is an excellent choice for enterprises of all sizes and industries that require an enterprise-class relational database solution. SQL Server, which has been around for more than 30 years, is a market leader in relational database management systems (RDBMS) and now comes with features for data analytics, business intelligence, and machine learning.

Microsoft SQL Server supports a wide range of languages and client libraries. However, enterprises are increasingly turning to more adaptable, simple-to-implement API-driven solutions that let users access the server or its data through a single REST API interface. Building such an interface involves not only programming a communication bridge between the database and the endpoints, but also addressing performance, security, and compliance.

As APIs for database-backed web applications multiplied, they produced a variety of XML and then JSON outputs, and libraries sprang up in every major programming language to help developers consume those outputs. Learning each API individually was difficult, and combining outputs from multiple APIs was harder still.

Consider an application that answers a question like: what events are planned for a specific location and time? To do so, it might have to call a half-dozen APIs, each of which requires a different method of making a request and unpacking the response.

OData, the Open Data Protocol, has been an OASIS standard since 2014. In theory, any database-backed web app could now have an "OData head" that generates a default API, requiring no code from app developers and no new request/response protocols for API consumers to learn.

Now more than ever, software development requires composing solutions from a growing number of APIs. Often a library exists to wrap each API in your preferred programming language, saving you the effort of making raw REST calls and parsing the results. However, because each wrapper represents results in its own way, you must normalize those representations when composing a multi-API solution. And because that blending happens in a language-specific manner, your solution is bound to that language. Whether that language is JavaScript, Python, Java, or C#, it is not the most universal or powerful way to query (or update) a database.

What is the most effective method? SQL has been hiding in plain sight the entire time. SQL has re-established itself as the preeminent data interface after being battle-hardened for decades and evolving beyond the pure relational model. And it's poised to become the API unifier that we desperately need now more than ever.

Foreign data wrappers for APIs

Steampipe (steampipe.io) is an open-source tool that retrieves data from numerous APIs and uses it to populate database tables. The database is Postgres, which has become a framework for building many kinds of database-like systems by means of extensions that deeply customize its core. The foreign data wrapper (FDW) is a type of Postgres extension that generates tables from external data sources. Steampipe bundles a Postgres instance that loads an API-oriented foreign data wrapper, and the FDW in turn communicates with a growing family of plug-ins that consume APIs and feed data into Postgres tables.

To put these abstractions into context, consider how you would approach the following problem. You run AWS services with public endpoints and want to know whether any of those endpoints have been flagged as vulnerable in Shodan, a system that scans public endpoints. Your solution would most likely look something like this:

1. Discover how to use the AWS API to locate your endpoints

2. Discover how to use the Shodan API to evaluate your endpoints

3. Understand how to use those two APIs together to answer the question

Each plug-in is configured to authenticate to its API with the same credentials you would use to call that API directly. Beyond that, you need to know nothing about the underlying REST calls or the libraries that wrap them. The solution is composed of tables that work consistently within and across APIs. You inspect them (aws_ec2_instance, shodan_host) to learn the names of their columns, then join them in time-honored SQL fashion.
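To make the shape of that join concrete, here is a minimal sketch using an in-memory SQLite database with mocked rows standing in for Steampipe's live aws_ec2_instance and shodan_host foreign tables. The column names and sample values are illustrative, not a definitive reflection of the real plug-in schemas.

```python
import sqlite3

# Mocked stand-ins for Steampipe's API-backed foreign tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    create table aws_ec2_instance (instance_id text, public_ip_address text);
    create table shodan_host (ip text, vulns text);
    insert into aws_ec2_instance values
        ('i-0abc123', '203.0.113.10'),
        ('i-0def456', '203.0.113.11');
    insert into shodan_host values ('203.0.113.10', 'CVE-2021-12345');
""")

# Join the two "API" tables just as you would in a Steampipe SQL session:
# which of my public endpoints has Shodan flagged?
rows = conn.execute("""
    select i.instance_id, h.vulns
    from aws_ec2_instance i
    join shodan_host h on h.ip = i.public_ip_address
""").fetchall()
print(rows)  # → [('i-0abc123', 'CVE-2021-12345')]
```

The point is that once both APIs are surfaced as tables, the cross-API question collapses into one ordinary SQL join.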

Every API has a plug-in

For this two-API solution to work, plug-ins must exist that map both APIs to tables. That wouldn't be necessary if both services supported OData: their APIs would be queryable by default, though perhaps not with the sophistication that SQL provides. However, these two services, like most, offer no standard interface to their APIs, so they must be wrapped one way or another. Steampipe's plug-in SDK makes life easier for plug-in authors by abstracting away connection management, retry logic, caching, and, of course, the mapping of API results to tables.

Steampipe plug-ins are written in Go, and they draw on a large ecosystem of Go libraries that wrap APIs. But only plug-in authors need to know that. As a Steampipe user, you see only tables, and you write SQL. Thanks to SQL's evolution, that now includes features such as common table expressions (CTEs, or WITH clauses) and JSON columns. But SQL is still SQL.
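The "modern SQL" features mentioned above can be sketched briefly. This example combines a CTE (WITH clause) with a JSON column, using SQLite's built-in JSON functions as a stand-in for Postgres' jsonb operators; the table name and payload fields are invented for illustration.

```python
import json
import sqlite3

# A table whose payload column holds raw JSON, as an API-backed table might.
conn = sqlite3.connect(":memory:")
conn.execute("create table api_results (id integer, payload text)")
conn.executemany(
    "insert into api_results values (?, ?)",
    [(1, json.dumps({"region": "us-east-1", "count": 3})),
     (2, json.dumps({"region": "eu-west-1", "count": 5}))],
)

# A CTE extracts fields from the JSON column; the outer query filters on them.
rows = conn.execute("""
    with counts as (
        select json_extract(payload, '$.region') as region,
               json_extract(payload, '$.count')  as n
        from api_results
    )
    select region, n from counts where n > 3
""").fetchall()
print(rows)  # → [('eu-west-1', 5)]
```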

Could such plug-ins be built for every API? Steampipe debuted early last year with a handful of plug-ins; there are now more than sixty, and the number is growing rapidly. So far most have been written by the core team, but external contributions are increasing. Thanks to the plug-in SDK, which handles the heavy lifting, it's straightforward to create a plug-in that maps an API to a set of tables.

Standing on the shoulders of Postgres

Because Steampipe embeds Postgres, it inherits all of Postgres' capabilities. Live API querying is Steampipe's core strength, but you can also, for example, create materialized views to persist API-sourced data, and write Postgres functions to operate on it. Other Postgres extensions can be loaded and used with Steampipe tables as well. For instance, Postgres' tablefunc extension can perform SQL crosstabs on spreadsheet data from Steampipe's Google Sheets plug-in.
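In Postgres the persistence step is `create materialized view ... as select ...`. SQLite has no materialized views, so this sketch approximates the idea with `create table ... as select`, snapshotting an aggregate over mocked spreadsheet-style rows; all names here are illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Mocked rows, as might come from a Google Sheets plug-in table.
    create table sheet_rows (category text, amount integer);
    insert into sheet_rows values ('a', 1), ('a', 2), ('b', 5);

    -- Persist an aggregate, as a Postgres materialized view would.
    create table category_totals as
        select category, sum(amount) as total
        from sheet_rows
        group by category;
""")
totals = conn.execute(
    "select * from category_totals order by category"
).fetchall()
print(totals)  # → [('a', 3), ('b', 5)]
```

With a real materialized view you would later run `refresh materialized view` to re-query the APIs; the snapshot-table approximation would instead be dropped and rebuilt.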

Another advantage of building on Postgres is that any Postgres-compatible API client can connect to Steampipe. This includes command-line tools like psql as well as graphical user interface (GUI) tools like Tableau, Power BI, Metabase, and Superset, which add visualization and interactivity to live API data.

Postgres may never be as widely embedded as SQLite, but it is more capable, and it is increasingly used to power an ecosystem of interoperating data tools. Steampipe extends Postgres to offer unified API access and a familiar SQL environment for reasoning about the data those APIs provide.

Methods for Integrating SQL Server REST API

Using Microsoft SQL Server Integration Services

One method is to use Microsoft SQL Server Integration Services (SSIS) to load data from REST APIs into Microsoft SQL Server. This method requires creating a data flow task over a REST-based connection. You then use the "DataModel" property to map your incoming data and load it correctly into your Microsoft SQL Server database.

Microsoft SQL Server Integration Services (SSIS) is a powerful component of Microsoft SQL Server that enables users to perform a variety of complex data migration tasks. It provides data warehousing tools that help automate ETL to some extent, workflow tools that help automate the data migration process, and a diverse set of data connectivity tools that allow users to unify data.

Using custom code snippets

Another method is to configure the SQL Server REST API integration with custom code. This approach requires installing the JDBC driver for Microsoft SQL Server and then using a connection URL to connect to the database. You can then use a Statement object to run SQL operations and insert data from the REST API into Microsoft SQL Server databases.
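The custom-code pattern, reduced to its essentials, is: fetch JSON from the API, then insert rows with parameterized SQL. In this sketch a hard-coded payload stands in for the HTTP call, and SQLite stands in for SQL Server (with JDBC or pyodbc you would open the connection from a connection URL instead); the table and field names are illustrative.

```python
import json
import sqlite3

# Stand-in for the REST API response body.
payload = json.loads('[{"id": 1, "name": "alpha"}, {"id": 2, "name": "beta"}]')

# Stand-in for the SQL Server connection.
conn = sqlite3.connect(":memory:")
conn.execute("create table api_items (id integer primary key, name text)")

# Parameterized inserts, one row per API record.
conn.executemany(
    "insert into api_items (id, name) values (?, ?)",
    [(item["id"], item["name"]) for item in payload],
)
conn.commit()

count = conn.execute("select count(*) from api_items").fetchone()[0]
print(count)  # → 2
```

Parameterized statements (the `?` placeholders) matter here: API payloads are untrusted input, and string-building the SQL instead would invite injection.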

Using Hevo Data

A managed, no-code data pipeline platform such as Hevo Data enables you to load data in real time from REST APIs (among 100+ sources) into Microsoft SQL Server. Hevo has a low learning curve, allowing users to load data in minutes without sacrificing performance. Furthermore, Hevo includes comprehensive out-of-the-box integration support for sources such as databases, files, and analytics engines, giving users the flexibility to bring in data of all types as seamlessly as possible.

Conclusion

Microsoft SQL Server is accessible through a variety of programming languages and client libraries, but companies adopting API-driven development strategies increasingly want to unify access through a single REST interface. Creating such an interface, however, entails much more than programming a connection between the database and the various endpoints; developers must also consider performance, security, and compliance. Furthermore, the API should support access to views and stored procedures.