WELCOME TO FRONTIER

The Frontier distributed database caching system distributes data from data
sources to many clients around the world. The name comes from "N Tier", where
N is any number and the Tiers are layers of distribution locations. The
protocol is HTTP-based and uses a RESTful architecture, which is excellent for
caching and scales well. The Frontier system uses the standard web caching
tool squid to cache the HTTP objects at every site. It is ideal for
applications where there
are large numbers of widely distributed clients that read basically the same
data at close to the same time, in much the same way that popular websites are
read by many clients.
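
As a rough sketch of the idea (not the actual Frontier wire protocol or client
API; the server name, proxy name, path, and query encoding are invented for
illustration), a read-only query can be turned into an ordinary HTTP GET and
routed through a site-local squid, so identical requests from many worker
nodes are answered from the cache instead of the central database:

    import base64
    import urllib.request

    # Hypothetical central Frontier server and site-local squid proxy.
    SERVER = "http://frontier.example.org:8000/Frontier"
    PROXIES = {"http": "http://squid.mysite.example:3128"}

    def fetch(sql):
        # Encoding the query into the URL makes each distinct query a single
        # cacheable HTTP object; identical queries map to identical URLs.
        encoded = base64.urlsafe_b64encode(sql.encode()).decode()
        url = SERVER + "?query=" + encoded
        opener = urllib.request.build_opener(urllib.request.ProxyHandler(PROXIES))
        with opener.open(url) as response:
            return response.read()

    # Many clients asking for the same conditions data produce the same URL,
    # so squid answers all but the first request from its cache.
    payload = fetch("SELECT * FROM conditions WHERE run = 123456")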

For Large Hadron Collider (LHC) projects, the database is located at the CERN
Tier 0, and the Frontier system distributes data to all the Tier 1, Tier 2,
and Tier 3 sites around the world. Within each site the Frontier system also
distributes the data to all the worker nodes (typically hundreds or thousands)
that need to read the data.
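
As a hedged illustration of this layered lookup (the hostnames, ports, and
logic below are invented and are not the actual frontier client code), a
worker-node client typically prefers its site's squid caches and only falls
back to a direct connection to the central servers when the local caches are
unreachable:

    import urllib.request

    # Hypothetical site-local squids and central servers, tried in order.
    SITE_PROXIES = ["http://squid1.mysite.example:3128",
                    "http://squid2.mysite.example:3128"]
    CENTRAL_SERVERS = ["http://frontier1.example.org:8000/Frontier",
                       "http://frontier2.example.org:8000/Frontier"]

    def fetch(path):
        # Try each local squid against the first server, then each central
        # server directly; return the first successful answer.
        attempts = [(proxy, CENTRAL_SERVERS[0]) for proxy in SITE_PROXIES]
        attempts += [(None, server) for server in CENTRAL_SERVERS]
        last_error = None
        for proxy, server in attempts:
            handler = urllib.request.ProxyHandler({"http": proxy} if proxy else {})
            opener = urllib.request.build_opener(handler)
            try:
                with opener.open(server + path, timeout=10) as response:
                    return response.read()
            except OSError as error:  # urllib's URLError is an OSError
                last_error = error
        raise last_error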

The Frontier system was developed for the CDF experiment at Fermilab and is
heavily used there. Fermilab also adapted Frontier for the CMS experiment at the
LHC at CERN and it is used to access "conditions data" (mainly detector
calibrations and alignments) at all its sites worldwide. The ATLAS experiment at
the LHC later also adopted Frontier to distribute conditions data to all of its
sites. The JUNO neutrino project based in China also adopted Frontier much
later.

Even though this web site is hosted at CERN for the convenience of the LHC
projects, Frontier is maintained by Fermilab. The LHC projects chose SQL
queries as the level of abstraction between client and server, but the
Frontier system is designed with plugins so applications can put more of the
logic on the server side, and that is how CDF uses it. There is also a plugin
that reads from files on the local disk or other HTTP web servers. The main
advantage of using Frontier instead of directly using an HTTP web server and
squid proxy is that years of experience have resulted in Frontier supplying
many features that are important for robust and flexible operation on
computing grids. A summary of the features and components of the Frontier
system is available on this overview webpage.
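
As a purely illustrative sketch of the plugin idea mentioned above (the real
server is a Java Tomcat servlet and its plugin interface is not shown here;
the names and request format below are invented), the server maps each
incoming request onto whichever backend the application configured, for
example a SQL database or files on local disk:

    import sqlite3

    def sql_plugin(request):
        # LHC-style use keeps the abstraction at the SQL level: the request
        # carries a query and the server simply executes and returns it.
        with sqlite3.connect("conditions.db") as db:
            return db.execute(request["query"]).fetchall()

    def file_plugin(request):
        # A file plugin instead serves objects from local disk (it could also
        # fetch them from another HTTP server), which is one way to put more
        # of the logic on the server side.
        with open(request["path"], "rb") as f:
            return f.read()

    PLUGINS = {"sql": sql_plugin, "file": file_plugin}

    def handle(request):
        # Dispatch to the configured backend; the client never needs to know
        # which kind of data source answered.
        return PLUGINS[request["type"]](request)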

--------------------------------------------------------------------------------

Source code and other project information for the frontier client (implemented
in C and C++) and the frontier server (implemented as a Java Tomcat servlet)
are freely available as open source, mostly under the Fermilab Fermitools
license (a BSD license). The Frontier project maintains a squid distribution
that has some bug fixes and pre-configuration for supporting the Frontier
application. Here's a brief introduction to the CMS Frontier architecture.

--------------------------------------------------------------------------------

These are the CMS Offline Frontier monitoring systems:
 * Response times and queues on central servers (maxthreads)
 * Availability status of CERN central servers and T0 squids (Service
   Availability)
 * Availability status at grid sites worldwide (SAM)
 * Condition database requests monitoring (ElasticSearch, requires login)
 * Overall status

These are the ATLAS Frontier monitoring systems:
 * Response times and queues on central servers (maxthreads)
 * Availability status of ATLAS launchpad servers (Service Availability)
 * Availability status at grid sites worldwide (SAM)
 * Condition database requests monitoring (ElasticSearch, requires login)

This is the JUNO Frontier monitoring system:
 * Response times and queues on central servers (maxthreads)
 * Availability status of IHEP launchpad servers (Service Availability)

The monitoring for the squids that back the Frontier infrastructure is now
federated under the WLCG banner. The WLCG Squid Monitoring service monitors
squids for CMS, ATLAS, JUNO, and CVMFS (a non-Frontier-related WLCG service).

For general discussions about Frontier subscribe to the frontier-talk mailing
list. If you don't have a CERN account, on the login screen click on
Create/Check your account and register for a Lightweight Account in the right
column. Once you're logged in, click on the 'Join/leave group' link to the right
of the frontier-talk search result. Discussion also happens in the CERN
Mattermost Frontier team. If you don't have a CERN account for that, click on
"CERN Single Sign-On" and use "Guest access" in the lower right, click on
"Register", and fill in your information. After verifying your email address, go
back to the original link, use "Click here to sign in", then click on "Guest
access" again, but this time log in with your email and password. If you don't
already have a Mattermost app, you might also want to install that. If you have
support questions for CMS or ATLAS Frontier, send a message to one of these:

(The addresses are displayed as images to prevent spam, so they can't be
clicked or copied, sorry.)

Some publications about Frontier:
 * A CHEP 2008 paper on the use of Frontier in CMS.
 * A CHEP 2009 paper describing its efficient cache consistency strategy.
 * A CHEP 2010 paper on RESTful protocols for High Energy Physics with Frontier
   as the main example.
 * A CHEP 2012 paper on operational experience with the Frontier system in CMS,
   including a description of the monitoring systems.
 * A CHEP 2013 paper on security in CVMFS and Frontier.
 * A CHEP 2016 paper on WLCG Web Proxy Auto Discovery (WPAD).
 * A CHEP 2018 paper on caching some Frontier and CVMFS traffic in Cloudflare.
 * A CHEP 2020 paper on extending WLCG WPAD for dynamically registered squids.
 * There is also an article about Frontier in International Science Grid This
   Week, issue 173 of 5 May 2010.