www.influxdata.com
172.67.213.236

Submitted URL: http://influxdata.com/
Effective URL: https://www.influxdata.com/main/
Submission: On October 03 via manual from US — Scanned from DE

Form analysis: 2 forms found in the DOM

<form id="mktoForm_1212" novalidate="novalidate" class="mktoForm mktoHasWidth mktoLayoutAbove">
  <style type="text/css"></style>
  <div class="mktoFormRow">
    <div class="mktoFieldDescriptor mktoFormCol">
      <div class="mktoOffset"></div>
      <div class="mktoFieldWrap mktoRequiredField"><label for="Email" id="LblEmail" class="mktoLabel mktoHasWidth">
          <div class="mktoAsterix">*</div>Email
        </label>
        <div class="mktoGutter mktoHasWidth"></div><input id="Email" name="Email" maxlength="255" aria-labelledby="LblEmail InstructEmail" type="email" class="mktoField mktoEmailField mktoHasWidth mktoRequired" aria-required="true"><span
          id="InstructEmail" tabindex="-1" class="mktoInstruction"></span>
        <div class="mktoClear"></div>
      </div>
      <div class="mktoClear"></div>
    </div>
    <div class="mktoClear"></div>
  </div>
  <div class="mktoFormRow">
    <div class="mktoFieldDescriptor mktoFormCol">
      <div class="mktoOffset"></div>
      <div class="mktoFieldWrap mktoRequiredField"><label for="Privacy_Policy_Consent__c" id="LblPrivacy_Policy_Consent__c" class="mktoLabel mktoHasWidth checkboxLabel">
          <div class="mktoAsterix" style="display: none;">*</div>
        </label>
        <div class="mktoGutter mktoHasWidth"></div>
        <div class="mktoLogicalField mktoCheckboxList mktoHasWidth mktoRequired"><input name="Privacy_Policy_Consent__c" id="mktoCheckbox_34957_0" type="checkbox" value="Yes" aria-required="true"
            aria-labelledby="LblPrivacy_Policy_Consent__c LblmktoCheckbox_34957_0 InstructPrivacy_Policy_Consent__c" class="mktoField"><label for="mktoCheckbox_34957_0" id="LblmktoCheckbox_34957_0" class="checkboxLabel">By submitting this form, you
            agree to the <a href="https://www.influxdata.com/legal/privacy-policy/">Privacy Policy</a> and <a href="https://www.influxdata.com/legal/cookie-policy/">Cookie Policy</a>.</label></div><span id="InstructPrivacy_Policy_Consent__c"
          tabindex="-1" class="mktoInstruction"></span>
        <div class="mktoClear"></div>
      </div>
      <div class="mktoClear"></div>
    </div>
    <div class="mktoClear"></div>
  </div>
  <div class="mktoFormRow"><input type="hidden" name="utm_source_lt__c" class="mktoField mktoFieldDescriptor mktoFormCol" value="">
    <div class="mktoClear"></div>
  </div>
  <div class="mktoFormRow"><input type="hidden" name="utm_medium_lt__c" class="mktoField mktoFieldDescriptor mktoFormCol" value="">
    <div class="mktoClear"></div>
  </div>
  <div class="mktoFormRow"><input type="hidden" name="utm_campaign_lt__c" class="mktoField mktoFieldDescriptor mktoFormCol" value="">
    <div class="mktoClear"></div>
  </div>
  <div class="mktoFormRow"><input type="hidden" name="utm_content_lt__c" class="mktoField mktoFieldDescriptor mktoFormCol" value="">
    <div class="mktoClear"></div>
  </div>
  <div class="mktoFormRow"><input type="hidden" name="utm_term__c" class="mktoField mktoFieldDescriptor mktoFormCol" value="">
    <div class="mktoClear"></div>
  </div>
  <div class="mktoFormRow"><input type="hidden" name="cid__c" class="mktoField mktoFieldDescriptor mktoFormCol" value="">
    <div class="mktoClear"></div>
  </div>
  <div class="mktoFormRow"><input type="hidden" name="GCLID__c" class="mktoField mktoFieldDescriptor mktoFormCol" value="">
    <div class="mktoClear"></div>
  </div>
  <div class="mktoFormRow"><input type="hidden" name="msclkid__c" class="mktoField mktoFieldDescriptor mktoFormCol" value="">
    <div class="mktoClear"></div>
  </div>
  <div class="mktoFormRow"><input type="hidden" name="fbclid__c" class="mktoField mktoFieldDescriptor mktoFormCol" value="">
    <div class="mktoClear"></div>
  </div>
  <div class="mktoButtonRow"><span class="mktoButtonWrap mktoNative"><button type="submit" class="mktoButton">Submit</button></span></div><input type="hidden" name="formid" class="mktoField mktoFieldDescriptor" value="1212"><input type="hidden"
    name="munchkinId" class="mktoField mktoFieldDescriptor" value="972-GDU-533"><input type="hidden" name="mkto_content_name" class="mktoField mktoFieldDescriptor" value="Newsletter Signup">
</form>

<form novalidate="novalidate" style="font-family: inherit; font-size: 12px; color: rgb(51, 51, 51); visibility: hidden; position: absolute; top: -500px; left: -1000px; width: 1600px;" class="mktoForm mktoHasWidth mktoLayoutAbove"></form>

Text Content

Keep full control of your data without compromising performance with InfluxDB
Clustered. Learn More

Products


PRODUCTS

Managed by Us

Simplify and scale with cloud services

--------------------------------------------------------------------------------

InfluxDB Cloud Serverless
InfluxDB Cloud Dedicated

Managed by You

For organizations that need full control

--------------------------------------------------------------------------------

InfluxDB Clustered

Integrations & Collectors

300+ plugins, easy interoperability

--------------------------------------------------------------------------------

Collectors
Client Libraries
Lakehouse / Warehouse

Other Resources

--------------------------------------------------------------------------------

Performance Comparisons
Find the Right Product
Platform Overview
Use Cases


USE CASES

DATA WORKLOADS

Real-Time Analytics

Collect, analyze, and predict in real time

--------------------------------------------------------------------------------

Infrastructure Monitoring
Real-Time Monitoring
DevOps Monitoring
Security Event Monitoring
Application Monitoring
Gaming Analytics

Network & Device Telemetry

Modernize telemetry data collection and analysis

--------------------------------------------------------------------------------

Network Monitoring
Aerospace Monitoring
IoT Analytics

Modern Data Historian

Modernize operations with time series data

--------------------------------------------------------------------------------

Renewable Energy Monitoring
Process Modernization
Digital Transformation
Predictive Maintenance

Machine Learning & AI

Turn sensor data into actionable intelligence

--------------------------------------------------------------------------------

Predictive Analytics
ML with Lakehouses
Explore All Use Cases
Customer Stories
See how Teréga reduced total cost
of ownership by 50%

Industries

--------------------------------------------------------------------------------

Aerospace
Energy & Utilities
Financial Services
Industrial IoT
Consumer IoT
Manufacturing
Gaming
Telecommunications
Developers


DEVELOPERS

Learn

--------------------------------------------------------------------------------

Developer Resources
Blog
Customers
Partners
Support
Webinars

Build

--------------------------------------------------------------------------------

Documentation
InfluxDB OSS
Telegraf Data Collection
AWS
Integrations

Connect

--------------------------------------------------------------------------------

InfluxDB University
Community
Events and Live Training

Featured Resources

--------------------------------------------------------------------------------

Streamline Migration
to InfluxDB 3.0
Getting Started with
MING Stack for IoT
Developer Overview
Pricing
Contact Us
Sign In

Log in to InfluxDB Cloud 2.0
Log in to InfluxDB Enterprise
Log in to InfluxDB Cloud 1.x
Start Now



EVERY SECOND COUNTS.


EVERY
SECOND
COUNTS.

Analyze a billion series at a fraction of the cost. InfluxDB is the platform for
time series data.

Get started with InfluxDB



TIME SERIES DATA FOR EVERY WORKLOAD


REAL-TIME ANALYTICS

Query data immediately upon arrival for real-time insights across systems and
applications.

Learn More



NETWORK AND DEVICE TELEMETRY

Monitor and control devices and sensors in IoT, network, and field deployments.

Learn More



MODERN DATA HISTORIAN

Unleash and transform on-site industrial OT data in the manufacturing plant and
on the factory floor.

Learn More



MACHINE LEARNING AND AI

Share real-time data with open ML/AI tools to create predictive analytics for
automated, intelligent applications and systems.

Learn More






PURPOSE-BUILT FOR REAL-TIME WITH PROVEN PERFORMANCE AT SCALE

SINGLE DATASTORE

Run analytics across multiple workload types with a single purpose-built
platform.

COLUMNAR DESIGN

Scales without limits with built-in storage and query performance optimization.

NATIVE SQL

Query directly with InfluxDB using standard SQL.

REAL-TIME QUERY

Sub-second query response on leading-edge data.

UNLIMITED CARDINALITY

Analyze billions of time series data points per second without limitations or
caps.

SUPERIOR DATA COMPRESSION

Maximize data compression to store more data at a fraction of the cost.


COMMITTED TO OPEN SOURCE SINCE DAY ONE

Since the first availability of InfluxDB in 2012, InfluxData has stayed true to
its open source roots—first as a distributor, then as a user, and now as a
contributor.


USE

We build with open source components from the Arrow ecosystem.


DISTRIBUTE

We deliver open source software including InfluxDB, Telegraf, and surrounding
tools.


CONTRIBUTE

We contribute upstream to open data tools, including FlightSQL and DataFusion.

INDUSTRIES


TIME SERIES ANALYTICS FOR EVERY INDUSTRY

Today, InfluxDB deployments span multiple industries, with customers running at
scale in any environment—public and private cloud, on-premises, and at the edge.

Manufacturing
Energy and Utilities
Telecommunications
Consumer IoT
Industrial IoT
Aerospace



MANUFACTURING

Analyze production data streams in real-time to identify bottlenecks, prevent
downtime, and power predictive maintenance for your industrial equipment.

Learn More



ENERGY AND UTILITIES

Monitor, optimize, and manage renewable energy and traditional power systems to
achieve smart grid balancing and optimization. Forecast and predict maintenance
needs for renewable energy sources, such as wind turbines and solar farms.

Learn More



TELECOMMUNICATIONS

Analyze network performance and usage patterns in telecommunication
infrastructure. Improve quality of service, optimize infrastructure resources,
and reduce operational costs.

Learn More



CONSUMER IOT

Use real-time monitoring and analytics to optimize energy consumption and
improve quality of life.

Learn More



INDUSTRIAL IOT

Optimize operational processes with real-time insights from industrial equipment
and sensors.

Learn More



AEROSPACE

Get real-time insights from satellites, networks, and every stage of the launch
operation process. Reduce errors and accelerate time to market in this
mission-critical space.

Learn More






DEVELOPERS CHOOSE INFLUXDB

More downloads, more open source users, and a larger community than any other
time series database in the world.


1B+

Downloads of InfluxDB via Docker

1M+

Open source instances running daily

2,800+

InfluxDB contributors

500M+

Telegraf downloads

#1

Time series database
Source: DB-Engines


CODE IN THE LANGUAGES YOU LOVE

No need to conform to a new language or technology. InfluxDB supports multiple
programming and query languages, with client libraries and integrations to make
things simple, all powered by a RESTful API.
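Under the hood, these client libraries all talk to a plain HTTP API. As a minimal sketch only (assuming the v2-compatible `/api/v2/write` endpoint and the `Token` authorization scheme; the host, org, and bucket values are placeholders, and `build_write_request` is our illustration, not a library function), a write request can be assembled with nothing but the standard library:

```python
import urllib.request

def build_write_request(host, token, org, bucket, record):
    """Build a POST request carrying line protocol to the v2 write endpoint."""
    url = f"{host}/api/v2/write?org={org}&bucket={bucket}&precision=ns"
    return urllib.request.Request(
        url,
        data=record.encode("utf-8"),
        headers={
            "Authorization": f"Token {token}",
            "Content-Type": "text/plain; charset=utf-8",
        },
        method="POST",
    )

# Sending it (the server replies 204 No Content on success):
# urllib.request.urlopen(build_write_request(host, token, org, bucket, record))
```

In practice the client libraries shown below are preferable, since they add batching, retries, and compression on top of this raw request.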

See All Integrations

from influxdb_client_3 import InfluxDBClient3
import pandas
import os

database = os.getenv('INFLUX_DATABASE')
token = os.getenv('INFLUX_TOKEN')
host="https://us-east-1-1.aws.cloud2.influxdata.com"

def querySQL():
  client = InfluxDBClient3(host, database=database, token=token)
  table = client.query(
    '''SELECT
        room,
        DATE_BIN(INTERVAL '1 day', time) AS _time,
        AVG(temp) AS temp,
        AVG(hum) AS hum,
        AVG(co) AS co
      FROM home
      WHERE time >= now() - INTERVAL '90 days'
      GROUP BY room, _time
      ORDER BY _time'''
  )

  print(table.to_pandas().to_markdown())

  client.close()
  
querySQL()
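The query above downsamples raw readings into per-room daily averages: `DATE_BIN(INTERVAL '1 day', time)` maps every timestamp onto the start of its day, and `GROUP BY` then averages each bucket. The same bucketing can be sketched locally in plain Python (the sample `readings` and the `daily_averages` helper are hypothetical, for illustration only):

```python
from collections import defaultdict
from datetime import datetime

readings = [
    ("2024-01-01T03:00", "Kitchen", 21.0),
    ("2024-01-01T15:00", "Kitchen", 23.0),
    ("2024-01-02T09:00", "Kitchen", 22.5),
]

def daily_averages(rows):
    # Bucket each timestamp to the start of its day (what
    # DATE_BIN(INTERVAL '1 day', time) does server-side), then average.
    buckets = defaultdict(list)
    for ts, room, temp in rows:
        day = datetime.fromisoformat(ts).date().isoformat()
        buckets[(room, day)].append(temp)
    return {key: sum(vals) / len(vals) for key, vals in sorted(buckets.items())}

print(daily_averages(readings))
# {('Kitchen', '2024-01-01'): 22.0, ('Kitchen', '2024-01-02'): 22.5}
```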
                  

from influxdb_client_3 import InfluxDBClient3
import os

database = os.getenv('INFLUX_DATABASE')
token = os.getenv('INFLUX_TOKEN')
host="https://us-east-1-1.aws.cloud2.influxdata.com"

def write_line_protocol():
  client = InfluxDBClient3(host, database=database, token=token)
  
  record = "home,room=Living\\ Room temp=22.2,hum=36.4,co=17i"
  
  print("Writing record:", record )
  client.write(record)
  
  client.close()

write_line_protocol()
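The `record` string above is InfluxDB line protocol: a measurement name, comma-separated tags, a space, then comma-separated fields, where an `i` suffix marks an integer and spaces inside tag values are backslash-escaped. A sketch of assembling such a record by hand (`to_line_protocol` is our illustration, not part of the client library, and it ignores timestamps and other escaping rules):

```python
def to_line_protocol(measurement, tags, fields):
    # Escape the characters that would break tag parsing.
    def escape(value):
        return str(value).replace(" ", "\\ ").replace(",", "\\,")

    tag_part = ",".join(f"{k}={escape(v)}" for k, v in tags.items())
    # Integers carry an "i" suffix; floats are written as-is.
    field_part = ",".join(
        f"{k}={v}i" if isinstance(v, int) else f"{k}={v}"
        for k, v in fields.items()
    )
    return f"{measurement},{tag_part} {field_part}"

record = to_line_protocol("home", {"room": "Living Room"},
                          {"temp": 22.2, "hum": 36.4, "co": 17})
print(record)  # home,room=Living\ Room temp=22.2,hum=36.4,co=17i
```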

                  

import ArgumentParser
import Foundation
import InfluxDBSwift

@main
struct QueryCpuData: AsyncParsableCommand {
  @Option(name: .shortAndLong, help: "The name or id of the bucket destination.")
  private var bucket: String

  @Option(name: .shortAndLong, help: "The name or id of the organization destination.")
  private var org: String

  @Option(name: .shortAndLong, help: "Authentication token.")
  private var token: String

  @Option(name: .shortAndLong, help: "HTTP address of InfluxDB.")
  private var url: String
}

extension QueryCpuData {
  mutating func run() async throws {
    //
    // Initialize Client with default Bucket and Organization
    //
    let client = InfluxDBClient(
            url: url,
            token: token,
            options: InfluxDBClient.InfluxDBOptions(bucket: bucket, org: org))

    // Flux query
    let query = """
                from(bucket: "\(self.bucket)")
                    |> range(start: -10m)
                    |> filter(fn: (r) => r["_measurement"] == "cpu")
                    |> filter(fn: (r) => r["cpu"] == "cpu-total")
                    |> filter(fn: (r) => r["_field"] == "usage_user" or r["_field"] == "usage_system")
                    |> last()
                """

    print("\nQuery to execute:\n\(query)\n")

    let response = try await client.queryAPI.queryRaw(query: query)

    let csv = String(decoding: response, as: UTF8.self)
    print("InfluxDB response: \(csv)")

    client.close()
  }
}
                  

import ArgumentParser
import Foundation
import InfluxDBSwift
import InfluxDBSwiftApis

@main
struct WriteData: AsyncParsableCommand {
  @Option(name: .shortAndLong, help: "The name or id of the bucket destination.")
  private var bucket: String

  @Option(name: .shortAndLong, help: "The name or id of the organization destination.")
  private var org: String

  @Option(name: .shortAndLong, help: "Authentication token.")
  private var token: String

  @Option(name: .shortAndLong, help: "HTTP address of InfluxDB.")
  private var url: String
}

extension WriteData {
  mutating func run() async throws {
    //
    // Initialize Client with default Bucket and Organization
    //
    let client = InfluxDBClient(
            url: url,
            token: token,
            options: InfluxDBClient.InfluxDBOptions(bucket: bucket, org: org))

    //
    // Record defined as Data Point
    //
    let recordPoint = InfluxDBClient
            .Point("demo")
            .addTag(key: "type", value: "point")
            .addField(key: "value", value: .int(2))
    //
    // Record defined as Data Point with Timestamp
    //
    let recordPointDate = InfluxDBClient
            .Point("demo")
            .addTag(key: "type", value: "point-timestamp")
            .addField(key: "value", value: .int(2))
            .time(time: .date(Date()))

    try await client.makeWriteAPI().write(points: [recordPoint, recordPointDate])
    print("Written data:\n\n\([recordPoint, recordPointDate].map { "\t- \($0)" }.joined(separator: "\n"))")
    print("\nSuccess!")

    client.close()
  }
}
                  

import {InfluxDBClient} from '@influxdata/influxdb3-client'
import {tableFromArrays} from 'apache-arrow';

const database = process.env.INFLUX_DATABASE;
const token = process.env.INFLUX_TOKEN;
const host = "https://us-east-1-1.aws.cloud2.influxdata.com";

async function main() {
    const client = new InfluxDBClient({host, token})
    const query = `
    SELECT
      room,
      DATE_BIN(INTERVAL '1 day', time) AS _time,
      AVG(temp) AS temp,
      AVG(hum) AS hum,
      AVG(co) AS co
    FROM home
    WHERE time >= now() - INTERVAL '90 days'
    GROUP BY room, _time
    ORDER BY _time
    `
    const result = await client.query(query, database)

    const data = {room: [], day: [], temp: []}

    for await (const row of result) {
      data.day.push(new Date(row._time).toISOString())
      data.room.push(row.room)
      data.temp.push(row.temp)
    }

    console.table([...tableFromArrays(data)])

    client.close()
}

main()
                  

import {InfluxDBClient} from '@influxdata/influxdb3-client'

const database = process.env.INFLUX_DATABASE;
const token = process.env.INFLUX_TOKEN;
const host = "https://us-east-1-1.aws.cloud2.influxdata.com";

async function main() {
    const client = new InfluxDBClient({host, token})

    const record = "home,room=Living\\ Room temp=22.2,hum=36.4,co=17i"
    await client.write(record, database)
    client.close()
}

main()
                  

package com.influxdb3.examples;

import com.influxdb.v3.client.InfluxDBClient;
import java.util.stream.Stream;

public final class Query {
    private Query() {
        //not called
    }

    /**
     * @throws Exception
     */
    public static void main(String[] args) throws Exception {

        final String hostUrl = "https://us-east-1-1.aws.cloud2.influxdata.com";
        final char[] authToken = (System.getenv("INFLUX_TOKEN")).toCharArray();
        final String database = System.getenv("INFLUX_DATABASE");

        try (InfluxDBClient client = InfluxDBClient.getInstance(hostUrl, authToken, database)) {
            String sql = """
                SELECT
                    room,
                    DATE_BIN(INTERVAL '1 day', time) AS _time,
                    AVG(temp) AS temp, AVG(hum) AS hum, AVG(co) AS co
                FROM home
                WHERE time >= now() - INTERVAL '90 days'
                GROUP BY room, _time
                ORDER BY _time""";

            String layoutHeading = "| %-16s | %-12s | %-6s |%n";
            System.out.printf("--------------------------------------------------------%n");
            System.out.printf(layoutHeading, "day", "room", "temp");
            System.out.printf("--------------------------------------------------------%n");

            String layout = "| %-16s | %-12s | %.2f |%n";
            try (Stream<Object[]> stream = client.query(sql)) {
                stream.forEach(row -> System.out.printf(layout, row[1], row[0], row[2]));
            }
        }
    }
}
                  

package com.influxdb3.examples;

import com.influxdb.v3.client.InfluxDBClient;

public final class Write {

    public static void main(String[] args) throws Exception {

        final String hostUrl = "https://us-east-1-1.aws.cloud2.influxdata.com";
        final char[] authToken = (System.getenv("INFLUX_TOKEN")).toCharArray();
        final String database = System.getenv("INFLUX_DATABASE");
        try (InfluxDBClient client = InfluxDBClient.getInstance(hostUrl, authToken, database)) {
            String record = "home,room=Living\\ Room temp=22.2,hum=36.4,co=17i";
            System.out.printf("Write record: %s%n", record);
            client.writeRecord(record);
        }
    }
}
                  

InfluxDB2::Client.use('https://localhost:8086', 'my-token', org: 'my-org') do |client|

  result = client
    .create_query_api
    .query_raw(query: 'from(bucket:"my-bucket") |> range(start: 1970-01-01) |> last()')
  puts result
end
                  

InfluxDB2::Client.use('https://localhost:8086', 'my-token',
                      bucket: 'my-bucket',
                      org: 'my-org',
                      precision: InfluxDB2::WritePrecision::NANOSECOND) do |client|

  write_api = client.create_write_api
  write_api.write(data: 'h2o,location=west value=33i 15')
end
                  

package example

import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.scaladsl.Sink
import com.influxdb.client.scala.InfluxDBClientScalaFactory
import com.influxdb.query.FluxRecord

import scala.concurrent.Await
import scala.concurrent.duration.Duration

object InfluxDB2ScalaExample {

  implicit val system: ActorSystem = ActorSystem("it-tests")

  def main(args: Array[String]): Unit = {

    val influxDBClient = InfluxDBClientScalaFactory
      .create("http://localhost:8086", "my-token".toCharArray, "my-org")

    val fluxQuery = ("from(bucket: \"my-bucket\")\n"
      + " |> range(start: -1d)"
      + " |> filter(fn: (r) => (r[\"_measurement\"] == \"cpu\" and r[\"_field\"] == \"usage_system\"))")

    //Result is returned as a stream
    val results = influxDBClient.getQueryScalaApi().query(fluxQuery)

    //Example of additional result stream processing on client side
    val sink = results
      //filter on client side using `filter` built-in operator
      .filter(it => "cpu0" == it.getValueByKey("cpu"))
      //take first 20 records
      .take(20)
      //print results
      .runWith(Sink.foreach[FluxRecord](it => println(s"Measurement: ${it.getMeasurement}, value: ${it.getValue}")
      ))

    // wait to finish
    Await.result(sink, Duration.Inf)

    influxDBClient.close()
    system.terminate()
  }
}
package com.influxdb.client.scala.internal

import org.apache.pekko.Done
import org.apache.pekko.stream.scaladsl.{Flow, Keep, Sink, Source}
import com.influxdb.client.InfluxDBClientOptions
import com.influxdb.client.domain.WritePrecision
import com.influxdb.client.internal.{AbstractWriteBlockingClient, AbstractWriteClient}
import com.influxdb.client.scala.WriteScalaApi
import com.influxdb.client.service.WriteService
import com.influxdb.client.write.{Point, WriteParameters}

import javax.annotation.Nonnull
import scala.collection.immutable.ListMap
import scala.concurrent.Future
import scala.jdk.CollectionConverters._

class WriteScalaApiImpl(@Nonnull service: WriteService, @Nonnull options: InfluxDBClientOptions)

  extends AbstractWriteBlockingClient(service, options) with WriteScalaApi {

  override def writeRecord(precision: Option[WritePrecision], bucket: Option[String], org: Option[String]): Sink[String, Future[Done]] = {
    Flow[String]
      .map(record => Seq(new AbstractWriteClient.BatchWriteDataRecord(record)))
      .toMat(Sink.foreach(batch => writeHttp(precision, bucket, org, batch)))(Keep.right)
  }

  override def writeRecords(precision: Option[WritePrecision], bucket: Option[String], org: Option[String]): Sink[Seq[String], Future[Done]] = {
    writeRecords(toWriteParameters(precision, bucket, org))
  }

  override def writeRecords(parameters: WriteParameters): Sink[Seq[String], Future[Done]] = {
    Flow[Seq[String]]
      .map(records => records.map(record => new AbstractWriteClient.BatchWriteDataRecord(record)))
      .toMat(Sink.foreach(batch => writeHttp(parameters, batch)))(Keep.right)
  }


  override def writePoint(bucket: Option[String], org: Option[String]): Sink[Point, Future[Done]] = {
    Flow[Point]
      .map(point => (point.getPrecision, Seq(new AbstractWriteClient.BatchWriteDataPoint(point, options))))
      .toMat(Sink.foreach(batch => writeHttp(Some(batch._1), bucket, org, batch._2)))(Keep.right)
  }

  override def writePoints(bucket: Option[String], org: Option[String]): Sink[Seq[Point], Future[Done]] = {
    writePoints(new WriteParameters(bucket.orNull, org.orNull, null, null))
  }

  override def writePoints(parameters: WriteParameters): Sink[Seq[Point], Future[Done]] = {
    Flow[Seq[Point]]
      // create ordered Map
      .map(records => records.foldRight(ListMap.empty[WritePrecision, Seq[Point]]) {
        case (point, map) => map.updated(point.getPrecision, point +: map.getOrElse(point.getPrecision, Seq()))
      }.toList.reverse)
      .map(grouped => grouped.map(group => (group._1, group._2.map(point => new AbstractWriteClient.BatchWriteDataPoint(point, options)))))
      .flatMapConcat(batches => Source(batches))
      .toMat(Sink.foreach(batch => writeHttp(parameters.copy(batch._1, options), batch._2)))(Keep.right)
  }

  override def writeMeasurement[M](precision: Option[WritePrecision], bucket: Option[String], org: Option[String]): Sink[M, Future[Done]] = {
    Flow[M]
      .map(measurement => {
        val parameters = toWriteParameters(precision, bucket, org)
        Seq(toMeasurementBatch(measurement, parameters.precisionSafe(options)))
      })
      .toMat(Sink.foreach(batch => writeHttp(precision, bucket, org, batch)))(Keep.right)
  }

  override def writeMeasurements[M](precision: Option[WritePrecision], bucket: Option[String], org: Option[String]): Sink[Seq[M], Future[Done]] = {
    writeMeasurements(toWriteParameters(precision, bucket, org))
  }

  override def writeMeasurements[M](parameters: WriteParameters): Sink[Seq[M], Future[Done]] = {
    Flow[Seq[M]]
      .map(records => records.map(record => toMeasurementBatch(record, parameters.precisionSafe(options))))
      .toMat(Sink.foreach(batch => writeHttp(parameters, batch)))(Keep.right)
  }

  private def writeHttp(precision: Option[WritePrecision], bucket: Option[String], org: Option[String], batch: Seq[AbstractWriteClient.BatchWriteData]): Done = {
    writeHttp(toWriteParameters(precision, bucket, org), batch)
  }

  private def writeHttp(parameters: WriteParameters, batch: Seq[AbstractWriteClient.BatchWriteData]): Done = {
    write(parameters, batch.toList.asJava.stream())
    Done.done()
  }

  private def toWriteParameters(precision: Option[WritePrecision], bucket: Option[String], org: Option[String]): WriteParameters = {
    val parameters = new WriteParameters(bucket.orNull, org.orNull, precision.orNull, null)
    parameters.check(options)
    parameters
  }
}

  


                  

package influxdbv3

import (
	"context"
	"fmt"
	"os"
	"text/tabwriter"

	"github.com/apache/arrow/go/v12/arrow"
	"github.com/InfluxCommunity/influxdb3-go/influx"
)

func QuerySQL() error {
	url := "https://us-east-1-1.aws.cloud2.influxdata.com"
	token := os.Getenv("INFLUX_TOKEN")
	database := os.Getenv("INFLUX_DATABASE")

	client, err := influx.New(influx.Configs{
		HostURL:   url,
		AuthToken: token,
	})
	if err != nil {
		return err
	}
	defer func(client *influx.Client) {
		if err := client.Close(); err != nil {
			panic(err)
		}
	}(client)

	query := `
		SELECT
			room,
			DATE_BIN(INTERVAL '1 day', time) AS _time,
			AVG(temp) AS temp,
			AVG(hum) AS hum,
			AVG(co) AS co
		FROM home
		WHERE time >= now() - INTERVAL '90 days'
		GROUP BY room, _time
		ORDER BY _time
	`

	iterator, err := client.Query(context.Background(), database, query)
	if err != nil {
		return err
	}

	w := tabwriter.NewWriter(os.Stdout, 0, 8, 0, '\t', 0)
	fmt.Fprintln(w, "day\troom\ttemp")

	for iterator.Next() {
		row := iterator.Value()
		day := (row["_time"].(arrow.Timestamp)).ToTime(arrow.TimeUnit(arrow.Nanosecond))
		fmt.Fprintf(w, "%s\t%s\t%.2f\n", day, row["room"], row["temp"])
	}

	w.Flush()
	return nil
}
                  

package influxdbv3

import (
	"context"
	"fmt"
	"os"

	"github.com/InfluxCommunity/influxdb3-go/influx"
)

func WriteLineProtocol() error {
	url := "https://us-east-1-1.aws.cloud2.influxdata.com"
	token := os.Getenv("INFLUX_TOKEN")
	database := os.Getenv("INFLUX_DATABASE")

	client, err := influx.New(influx.Configs{
		HostURL:   url,
		AuthToken: token,
	})
	if err != nil {
		return err
	}
	defer func(client *influx.Client) {
		if err := client.Close(); err != nil {
			panic(err)
		}
	}(client)

	record := "home,room=Living\\ Room temp=22.2,hum=36.4,co=17i"
	fmt.Println("Writing record:", record)
	if err := client.Write(context.Background(), database, []byte(record)); err != nil {
		return err
	}
	return nil
}
                  

using System;
using System.Threading.Tasks;
using InfluxDB3.Client;
using InfluxDB3.Client.Query;

namespace InfluxDBv3;

public class Query
{
  static async Task QuerySQL()
  {
    const string hostUrl = "https://us-east-1-1.aws.cloud2.influxdata.com";
    string? database = System.Environment.GetEnvironmentVariable("INFLUX_DATABASE");
    string? authToken = System.Environment.GetEnvironmentVariable("INFLUX_TOKEN");

    using var client = new InfluxDBClient(hostUrl, authToken: authToken, database: database);
  
    const string sql = @"
      SELECT
        room,
        DATE_BIN(INTERVAL '1 day', time) AS _time,
        AVG(temp) AS temp,
        AVG(hum) AS hum,
        AVG(co) AS co
      FROM home
      WHERE time >= now() - INTERVAL '90 days'
      GROUP BY room, _time
      ORDER BY _time
    ";

    Console.WriteLine("{0,-30}{1,-15}{2,-15}", "day", "room", "temp");
    await foreach (var row in client.Query(query: sql))
    {
      Console.WriteLine("{0,-30}{1,-15}{2,-15}", row[1], row[0], row[2]);
    }

    Console.WriteLine();
  }
}
                  

                  using System;
using System.Threading.Tasks;
using InfluxDB3.Client;
using InfluxDB3.Client.Query;

namespace InfluxDBv3;

public class Write
{
  public static async Task WriteLineProtocol()
  {
    const string hostUrl = "https://us-east-1-1.aws.cloud2.influxdata.com";
    string? database = System.Environment.GetEnvironmentVariable("INFLUX_DATABASE");
    string? authToken = System.Environment.GetEnvironmentVariable("INFLUX_TOKEN");

    using var client = new InfluxDBClient(hostUrl, authToken: authToken, database: database);

    const string record = "home,room=Living\\ Room temp=22.2,hum=36.4,co=17i";
    Console.WriteLine("Write record: {0,-30}", record);
    await client.WriteRecordAsync(record: record);
  }
}

                  

                  client <- InfluxDBClient$new(url = "http://localhost:8086",
                             token = "my-token",
                             org = "my-org")
                            
data <- client$query('from(bucket: "my-bucket") |> range(start: -1h) |> drop(columns: ["_start", "_stop"])')
data
                  

                  client <- InfluxDBClient$new(url = "http://localhost:8086",
                             token = "my-token",
                             org = "my-org")
data <- ...
response <- client$write(data, bucket = "my-bucket", precision = "us",
                         measurementCol = "name",
                         tagCols = c("region", "sensor_id"),
                         fieldCols = c("altitude", "temperature"),
                         timeCol = "time")
                  
Read Documentation for v2 Read Documentation for v3


LOVED BY DEVELOPERS, TRUSTED BY ENTERPRISES



500M+

Metrics collected daily


MISSION-CRITICAL MONITORING

Real-time data access for queries

LOFT ORBITAL


SPACE MADE SIMPLE: HOW LOFT ORBITAL DELIVERS UNPARALLELED SPEED-TO-SPACE WITH
INFLUXDB CLOUD

Read Case Study


65M+

daily events processed


45X

more resource efficient

CAPITAL ONE


"INFLUXDB IS A HIGH-SPEED READ AND WRITE DATABASE. THE DATA IS WRITTEN IN
REAL-TIME, YOU CAN READ IT IN REAL-TIME, AND WHILE READING, YOU CAN APPLY YOUR
MACHINE LEARNING MODEL. SO, IN REAL-TIME, YOU CAN FORECAST AND DETECT
ANOMALIES."

Rajeev Tomer
Sr. Manager of Data Engineering

Read Case Study


50%

lower total cost of ownership


100K

real-time metrics with simplified deployment

TERÉGA


TERÉGA REPLACED ITS LEGACY DATA HISTORIAN WITH INFLUXDB


Read Case Study



VOLVO


“WE DECIDED, FROM A MONITORING PERSPECTIVE, THAT WE ARE... GOING WITH A BEST OF
BREED SETUP. SO, WE PUT THE BEST TOOLS IN PLACE, LIKE INFLUXDB FOR METRICS
MONITORING.”

Daniel Putz
DevOps Enablement

Read Case Study



WIDEOPENWEST


"I WAS BLOWN AWAY WITH HOW EASY IT WAS TO INSTALL AND CONFIGURE INFLUXDB. THE
CLUSTERING WAS EASY. THE DOCUMENTATION WAS GREAT, AND THE SUPPORT HAS BEEN
SECOND TO NONE."

Dylan Shorter
Engineer III, Software and Product Integration Engineering

Read Case Study


45%

Less equipment downtime


10%

Reduced waste

MAJIK SYSTEMS


FROM REACTIVE TO PROACTIVE: HOW MAJIK SYSTEMS EMBRACED PREDICTIVE MAINTENANCE
WITH INFLUXDB AND TIME SERIES DATA


Read Case Study



JU:NIZ ENERGY


“WITH INFLUXDB CLOUD DEDICATED, THE GREAT THING IS THAT WE DON'T NEED TO THINK
ABOUT DATA STORAGE COSTS OR USAGE ANYMORE BECAUSE DATA STORAGE GETS WAY
CHEAPER.”

Ricardo Kissinger
Head of IT Infrastructure and IT Security

Read Case Study



INFLUXDB IS A G2 LEADER IN TIME SERIES

“InfluxDB is a strong database built specifically for time series data. It has
made working with such data seamless and easy.”
— Verified G2 reviewer

Read reviews



START BUILDING NOW

Get started in minutes for free. Upgrade anytime to get $250 in credit for your
project.

Get InfluxDB Find the right product

PRODUCT & SOLUTIONS

 * InfluxDB
 * InfluxDB Cloud Serverless
 * InfluxDB Cloud Dedicated
 * InfluxDB Clustered
 * InfluxDB Comparison
 * Integrations
 * Data Lake / Warehouse
 * Data Collector
 * Pricing
 * Use Cases
 * Time Series Data
 * Time Series Database
 * Time Series Forecasting
 * Data Warehousing
 * Network Monitoring

DEVELOPERS

 * Guides
 * Blog
 * Customers
 * Support
 * Webinars
 * Documentation
 * Events & Live Training
 * InfluxDB University
 * Community
 * InfluxDB OSS
 * Telegraf
 * AWS
 * Product Integrations
 * Glossary

COMPANY

 * About
 * Careers
 * Partners
 * Newsroom
 * Contact Us
 * Customers

SIGN UP FOR THE INFLUXDATA NEWSLETTER

Email

By submitting this form, you agree to the Privacy Policy and Cookie Policy.

Submit

548 Market St, PMB 77953
San Francisco, California 94104

FOLLOW US






© 2024 InfluxData Inc. All Rights Reserved.

Legal Security Cookie Policy Comparison



