New Tech Forum provides a venue to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content.
In the aftermath of Log4Shell, generating software bills of materials and quickly accessing their information will be critical to addressing the new realities of software supply chain vulnerabilities and attacks.
The difficulties and challenges of running Kubernetes multiply as you scale. Here are four things we’ll need in order to manage multi-cluster orchestration.
The last five years have seen the rise of the cloud data warehouse. What will the next five years bring?
Focus on these engineering best practices to build high-quality models that can be governed effectively.
A bug in the ubiquitous Log4j library can allow an attacker to execute arbitrary code on any system that uses Log4j to write logs. Does yours?
Open source Trivy plugs into the software build process and scans container images and infrastructure-as-code files for vulnerabilities and misconfigurations.
Moving data science into production has quite a few similarities to deploying an application. But there are key differences you shouldn’t overlook.
Kylin was built to query massive relational tables with sub-second response times. A new, fully distributed query engine steps up performance of both cubing and queries.
Developers quickly understood the value of containers for building cloud-native applications, and recognized that the Docker command-line tool beat all of the bells and whistles they got with PaaS.
Determining the performance metrics that really matter for your application can make life a lot easier for your team and express your standards clearly across the business.
By using the RED metrics—rate, error, and duration—you can get a solid understanding of how your services are performing for end-users.
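As a rough sketch of what the RED metrics look like in practice, the snippet below computes rate, error rate, and average duration from a window of request records. The `Request` shape and the 5xx error threshold are illustrative assumptions, not anything prescribed by the article:

```python
from dataclasses import dataclass

@dataclass
class Request:
    timestamp: float    # seconds since window start
    status: int         # HTTP status code
    duration_ms: float  # request latency

def red_metrics(requests, window_seconds):
    """Summarize a service window with the RED method:
    rate     - requests per second over the window
    errors   - fraction of requests that failed (5xx here)
    duration - average latency in milliseconds
    """
    rate = len(requests) / window_seconds
    errors = sum(1 for r in requests if r.status >= 500)
    error_rate = errors / len(requests) if requests else 0.0
    avg_duration = (sum(r.duration_ms for r in requests) / len(requests)
                    if requests else 0.0)
    return rate, error_rate, avg_duration
```

In production these numbers would come from an instrumentation library or a metrics backend rather than raw request lists, but the three dimensions are the same.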
When performance issues arise, checking the USE metrics—utilization, saturation, and errors—can help you identify system bottlenecks.
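A minimal illustration of the USE dimensions for a single resource follows. The function names and the queue-depth proxy for saturation are assumptions made for the sketch, not the article's definitions:

```python
def use_metrics(busy_seconds, interval_seconds, queue_depth, capacity, error_count):
    """Summarize a resource with the USE method:
    utilization - fraction of the interval the resource was busy
    saturation  - work queued beyond the resource's capacity
    errors      - error events observed in the interval
    """
    utilization = busy_seconds / interval_seconds
    saturation = max(0, queue_depth - capacity)
    return {"utilization": utilization,
            "saturation": saturation,
            "errors": error_count}
```

A resource showing high utilization with growing saturation is the classic signature of a bottleneck worth investigating first.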
Data science toil saps agility and prevents organizations from scaling data science efforts efficiently and sustainably. Here’s how to avoid it.
Legacy networking approaches don’t align with the way that cloud providers create services or access and only introduce more complexity. Move to the cloud, but leave your traditional networking behind.
Like Kubernetes itself, the underlying object storage should be distributed, decoupled, declarative, and immutable.
An overview of the strengths and weaknesses of today’s cloud database management systems.
For any company exploring the potential of the cloud and Kubernetes, adopting infrastructure as code, security as code, and automation will be essential.
Empowering cloud teams with automated policy-as-code guardrails helps them move faster and more securely.
Much of the software we use today is built on re-implemented APIs, like the Java API in question in Oracle v. Google. An Oracle victory would have stopped open-source innovation in its tracks.
Today, companies across every industry are deploying millions of machine learning models across multiple lines of business. Soon every enterprise will take part.
SLAs are for lawyers. Service level objectives not only introduce finer-grained reliability metrics, but also put that telemetry data into the context of user happiness.
Shipping software has always been about balancing speed and quality control. Many great technology companies built their empires by mastering this skill.
Algorithmic biases that lead to unfair or arbitrary outcomes take many forms. But we also have many strategies and techniques to combat them.
Data science can make robotic process automation more intelligent. Robotic process automation makes it easier to deploy data science models in production.
We wouldn’t roll our own cloud orchestration or payment processing software. Why are we still building our own authorization infrastructure?
AI has the power to liberate organizations from CRM-related manual processes and improve customer engagement, sales insights, and social networking, for starters.
Grafana Tempo is an open source, easy-to-use, high-volume distributed tracing system that takes advantage of 100% sampling, and only requires an object storage back end.
Time series forecasts are used to predict a future value or a classification at a particular point in time. Here’s a brief overview of their common uses and how they are developed.
The next frontier for data processing is a new platform capable of delivering insights, actions, and value the instant data is born.
Time series analysis involves identifying attributes of your time series data, such as trend and seasonality, by measuring statistical properties.
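For instance, one simple way to surface the trend component of a series is a moving average that smooths out short-term noise. This is a generic sketch, not a method taken from the article:

```python
def moving_average(series, window):
    """Estimate trend by averaging each run of `window` consecutive points."""
    return [sum(series[i:i + window]) / window
            for i in range(len(series) - window + 1)]
```

Subtracting the smoothed trend from the raw series then leaves the seasonal and residual components for further analysis.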
An open, services-oriented approach has clear advantages for building modular and scalable applications. We should take the same approach to our data.
Visualizing time series data is often the first step in observing trends that can guide time series modeling and analysis.
Apache Pulsar is an open source streaming platform that addresses some important limitations in Kafka, particularly for cloud-native applications.
Time series data holds key insights in domains ranging from science and medicine to systems monitoring and industrial IoT. Understand time series data and the databases designed to ingest, store, and analyze it.
How the fully managed Kafka service can bring peace and simplicity to the lives of those who depend on event streaming infrastructure.
PostgreSQL continues to improve in ways that meet the needs of even the most complex, mission-critical use cases. It also presents certain challenges.
ProxyJump forwards the stdin and stdout of the local client to the destination host, allowing us to set up jump servers without giving them direct SSH access.
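A typical ProxyJump setup looks like the following; the hostnames, addresses, and user are hypothetical placeholders:

```
# ~/.ssh/config
Host bastion
    HostName bastion.example.com
    User ops

# Reach the internal host through the jump server. The client
# connects end-to-end, so private keys never live on the bastion.
Host internal-db
    HostName 10.0.1.5
    User ops
    ProxyJump bastion
```

With this in place, `ssh internal-db` transparently tunnels through the bastion, and the jump host only ever relays encrypted traffic.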
At Snowflake, we fully embrace the value of open standards and open source. But we strive to avoid misguided applications of open that create costly complexity instead of low-cost ease of use.
Processes and process automation take many forms. Here’s how to navigate the growing ecosystem of tools for automating everything from simple repetitive tasks to complex custom workflows.
Pulling from container registries is key to ensuring the health and resilience of the CI/CD pipeline. Choose your registry with care.
An in-memory digital integration hub enables flexible, real-time information flow between mainframes and external systems, unlocking mainframe data for digital transformation.
You can avoid command line tedium and simplify access to a fleet of servers by creating a flexible configuration file for your SSH client. Here’s how.
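For example, a wildcard `Host` block can apply shared settings across a whole fleet; the host pattern, key path, and user here are hypothetical:

```
# ~/.ssh/config
Host web-*
    HostName %h.example.com
    User deploy
    IdentityFile ~/.ssh/fleet_ed25519
    ServerAliveInterval 60
```

Now `ssh web-01` expands to deploy@web-01.example.com with the shared key and keepalive settings, no extra flags required.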
As Moore’s Law loses steam, off-loading data compression, data encryption, low-level data processing, and other heavy-duty computation tasks to storage nodes makes sense. Here’s how that would work.
Nine out of 10 companies have accelerated their cloud adoption in response to the coronavirus pandemic, with corresponding increases in cloud spend—and waste.
By decoupling policy from applications, policy as code allows you to change the coding for policy without changing the coding for apps. Translation: reliability, uptime, and efficiency.
Enterprise-grade explainability solutions provide fundamental transparency into how machine learning models make decisions, as well as broader assessments of model quality and fairness. Is yours up to the job?
The past 12 months have revealed how valuable data science can be while also exposing its limitations. Expect big advances in the year to come.
A standard operating environment can reduce the time it takes to deploy, configure, maintain, support, and manage containerized applications. Let’s get SOEs and containers back together.
Build a React application to track the orbit of the International Space Station using Telegraf, InfluxDB, ExpressJS, and Giraffe.
Aerospike’s Cross-Datacenter Replication with Expressions makes it easy to route the right data at the right time across global applications to meet compliance mandates and reduce server, cloud, and bandwidth costs.