Table 4 GPS error sources and methods for reducing them (residual RMS error in meters, by method)

| Error source | Single GPS | Continental SBAS | Regional GPS PPP | 100 km DGPS | 50 km RTK float | 25 km RTK Int |
| Satellite clock | 2 | Between 0.0 | | | | |
| Satellite orbit | 2 | Between 0.1 | | | | |
| Ionosphere | 1–10 | Between 0.2 | | | | |
| Troposphere | 0.2–2 | Between 0.2 | | | | |
| Multipath | 0.5–100 | Goes down with antenna design, receiver HW quality, . . . | | | | |
| Receiver noise | 0.25 | Goes down with receiver HW quality | | | | |
| Typical one-satellite total, smoothed | 5 | Between 1.5 | | | | |
| Overall filtered, 65–95% | 2.5–5.0 | 2.0–4.0 | 0.4–0.8 | 0.4–0.8 | 0.2–0.4 | 0.02–0.2 |

Multi-Sensor Fusion for Robust and Accurate Positioning

Concept

One of the reasons that GNSS positioning is valuable is that it is one of the rare practical means of achieving globally-referenced position estimates. Other means tend to be less practical, such as registration of live camera images (or lidar point clouds) against a database of globally-referenced images or clouds (computationally expensive, and it is difficult to obtain the globally-referenced images or clouds in the first place), or the exotic example of celestial navigation, registering live sky images to known star trajectories (which requires star visibility). As discussed earlier, GNSS suffers from environment-dependent errors and outages, which can sometimes be sudden (as in the case of multipath or entering an obstructed environment). This means that sensors that provide uninterrupted output with slowly varying errors (albeit not globally referenced), such as inertial sensors or visual means of detecting position changes, are a beneficial complement to GNSS sensing. When they are combined, GNSS provides the global reference and the relative motion sensing provides a check on, or smoothing of, GNSS errors. Without GNSS, the errors of relative motion sensing grow over time without limit; without relative motion sensing, GNSS cannot be trusted and leaves periods without position output. With that motivation for combining GNSS with other sensors, this section provides an introduction to practically beneficial GNSS-aiding sensors and to algorithms for effectively combining their data with GNSS data.
Sensors

Inertial

Inertial sensors directly sense accelerations (that particular sensor is an accelerometer) and rotational rates (that is, angular speeds; that particular sensor is a gyroscope). A six degree-of-freedom (6 DOF) set of sensing elements provides three accelerations, one in the direction of each of the sensor's three axes, and three rotational rates, one around each of the sensor's axes. Such a combination of inertial sensing elements is often referred to as an inertial measurement unit (IMU). That example is a 6 DOF IMU, but there are reduced sets as well. A 3 DOF IMU is common in vehicles to reduce cost, because accelerations aligned with the vehicle longitudinal and lateral axes combined with the rotation rate around the vehicle vertical axis are more important than data about the other axes, since the vehicle usually maintains approximately planar motion. 6 DOF data is needed when sensor orientation is uncertain or when vehicle orientation variation due to chassis compliance affects integration of data from other sensors. Reference to an "IMU", as opposed to "accelerometers" and "gyroscopes", implies at least a certain level of alignment and integration of sensing in different directions, as well as some level of signal processing, and potentially even compensation of sensing errors.

Inertial sensing is an excellent complement to GNSS data: GNSS positioning data is absolutely (globally) referenced while integrating inertial data provides relative (change of) position; GNSS data often has an accurate mean but suffers from hard-to-predict sudden bias jumps while inertial data has a continuously drifting mean with a statistically well characterized (although not necessarily predictable) bias behavior; in normal operation GNSS data has outages due to sky obstructions while inertial data is always available (in an operational sensor, and they tend to stay that way). Due to this complementary nature, GNSS and IMU data can be effectively used together in a sensor fusion filter (of the kinds described later) where the IMU is a robust core that provides continuous output, while consistent GNSS data, when available, can be used to estimate the IMU biases.

All inertial sensor outputs are inherently contaminated with errors, mainly stemming from continuously changing bias (characterized by the "bias instability" metric), noise (characterized by "noise density"), varying scaling factors (for scaling output voltage to physical units), and misalignment between axes of individual sensing elements. All these mechanisms are highly affected by temperature variations, and vary across production series. They are also changed by the process of mounting a sensor on an electrical circuit board and by the flex and vibration of the board during vehicle operation. Some of these, for example some temperature variations, can be characterized before the sensor is put in use, stored in tables, and subtracted during operation (this is off-line calibration). The remaining errors need to be estimated during sensor use through sensor fusion (on-line calibration).
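As a concrete illustration of off-line calibration, the sketch below applies a temperature-indexed bias table to raw gyroscope readings before they would be integrated. The table values, temperature grid, and function names are hypothetical, not taken from any particular sensor data sheet.

```python
import numpy as np

# Hypothetical off-line calibration table: gyro bias (rad/s) characterized at a
# few temperatures before the sensor is put in use, one column per axis.
CAL_TEMPS_C = np.array([-40.0, -10.0, 20.0, 50.0, 85.0])
CAL_BIAS_RAD_S = np.array([
    [0.020, -0.015, 0.008],
    [0.012, -0.009, 0.005],
    [0.007, -0.004, 0.002],
    [0.004, -0.002, 0.001],
    [0.002, -0.001, 0.000],
])

def compensate_gyro(raw_rates_rad_s, temperature_c):
    """Subtract the temperature-dependent bias stored during off-line calibration.

    The errors that remain (bias instability, scale-factor drift, ...) still have
    to be estimated on-line, e.g. as additional states in the fusion filter.
    """
    bias = np.array([
        np.interp(temperature_c, CAL_TEMPS_C, CAL_BIAS_RAD_S[:, axis])
        for axis in range(3)
    ])
    return np.asarray(raw_rates_rad_s) - bias

# Example: compensate one 3-axis gyro sample taken at 35 °C
print(compensate_gyro([0.010, -0.003, 0.004], 35.0))
```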
The size and predictability of inertial sensor errors have a strong correlation with price, which constrains the technology used in the sensing element. Inertial sensor price has a huge range, from cents (100s of millions of sensors in production volume with micro-electro-mechanical systems (MEMS) technology) to hundreds of thousands of dollars (low volume for aerospace applications with laser or fiber optic technology). However, MEMS technology continues to improve in performance while decreasing in price, so that currently a few to tens of dollars can provide adequate inertial sensing for cooperating vehicle applications in fusion with other sensors.

Ranging

Ranging sensors are those that provide a direct measurement of distance to an object. Even a GNSS receiver is a ranging sensor in the sense that it internally measures distances to satellites, but due to the elaborate design used to achieve those measurements (think of all the satellites involved) they are not "direct" measurements, and thus GNSS is in a category of its own. Furthermore, a GNSS receiver combines its ranging measurements into position and velocity estimates.

Lidar is the best example of a ranging sensor that has proven practical in position estimation. It provides precise and repeatable range measurements (in most conditions), which, when limited to measuring ranges to static objects, can be used to estimate the relative motion of the host vehicle: any change in the measurement of range to a static object can be assumed to be due to the host motion with respect to that object. Lidar produces its measurements by sending out light pulses and converting the turnaround times of their reflections from objects into distances (Fig. 24).

Fig. 24 LiDAR
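As a minimal illustration of the time-of-flight principle just described, the sketch below converts a pulse's turnaround time into a range; the function and variable names are illustrative.

```python
SPEED_OF_LIGHT_M_S = 299_792_458.0

def range_from_turnaround(turnaround_time_s: float) -> float:
    """Convert a light pulse's round-trip (turnaround) time into a one-way distance.

    The pulse travels to the object and back, so the one-way range is half the
    distance covered during the turnaround time.
    """
    return SPEED_OF_LIGHT_M_S * turnaround_time_s / 2.0

# Example: a reflection arriving 200 ns after emission corresponds to ~30 m
print(range_from_turnaround(200e-9))  # ≈ 29.98 m
```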
Fig. 25 Stereo disparity

Vision

Vision refers to the use of images obtained from one or more cameras and the processing of them with algorithms to extract geometric information. The algorithms can be of the so-called classical kind, where signal processing or filtering techniques are applied to images to extract geometric features such as lines and points. Or they can be applications of deep artificial neural networks to extract those geometric features, which have become practical in the last several years. In either case, the first result is features that are defined, as equations or coordinates, in two-dimensional image space, which, to be useful for positioning, need to be converted to three-dimensional world coordinates.

Images from two cameras looking at the same scene (known as a stereo camera pair) can be used to convert image features into world space using the concept known as stereo disparity (Fig. 25), where the difference between the horizontal pixel locations of a real-world point seen in the two images is inversely proportional to the distance of that point to the plane of the cameras. This distance is the depth of the real point into the image. The two image coordinates and the depth allow placing the imaged point into three-dimensional world space. The quality of this result depends on the ability to correctly identify the same point in the two images and on having adequate camera separation in order to estimate the distance of faraway objects.
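To make the disparity relationship concrete, here is a small sketch that converts a pixel disparity into depth for an idealized, rectified stereo pair; the focal length and baseline numbers are made up for illustration.

```python
def depth_from_disparity(disparity_px: float,
                         focal_length_px: float,
                         baseline_m: float) -> float:
    """Depth of a point from its horizontal disparity in a rectified stereo pair.

    Depth is inversely proportional to disparity: Z = f * B / d, where f is the
    focal length in pixels, B is the camera separation (baseline) in meters, and
    d is the disparity in pixels.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a point in front of the cameras")
    return focal_length_px * baseline_m / disparity_px

# Illustrative numbers: 1000 px focal length, 30 cm baseline, 12 px disparity
print(depth_from_disparity(12.0, 1000.0, 0.30))  # = 25 m
```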
Alternatively, size clues can be used to determine the depth dimension. Farther away objects are proportionally smaller in an image. Having prior information on the expected real-world size of objects detected in images can then be used to calculate their distance. The quality of this result depends on an accurate real-world size assumption and accurate image size detection.

As with lidar, once features on imaged objects known to be static are converted to real-world coordinates, they can be tracked over time to calculate their velocities with respect to the host vehicle, which are opposite to the vehicle velocity with respect to the world. This velocity is a relative measurement that can be combined with GNSS positions (or even pseudoranges) to calculate a more robust and accurate position estimate.

Maps

When it can be assumed that vehicle travel is constrained to a map, then the location estimate can be helped by constraining it to trajectories allowed by the map. In addition, the location estimate can be improved by constraining it in a way that maximizes the matching between features that other sensors, such as cameras or lidars, report and the corresponding real-world features previously accurately located in the map. This prior mapping can be done either using highly instrumented vehicles (which are accurate, but expensive and thus few) or using large amounts of data from normal production vehicles (where sensor inaccuracies are compensated by the large amount of data).

Algorithms

The algorithms for combining (fusing) data from multiple sensors for purposes of better positioning can be broadly divided into two groups: filtering and optimization. In filtering, at each time step all the new sensor data available at that particular time is used to derive the best estimate, which is then propagated using a vehicle motion model to the next time sensor data is available. In this way, at each time step there is a position estimate, and each position estimate implicitly includes the benefit of all the prior data. Effective and popular filters that will be introduced in the sections that follow are the Kalman Filter and the Particle Filter.

Optimization approaches are often applied in post-processing (not in the vehicle during operation) to derive the best possible estimate given the data. They employ an optimization equation, specific to the estimation problem, that allows solving for the parameters of interest while minimizing an error metric over all the available data simultaneously. Optimization approaches are often limited to post-processing because of their high computation demands, but they can be applied in real time (in the vehicle) as well by limiting the data they operate on to a window of time. Bundle Adjustment is a state-of-the-art approach that will be introduced in a section that follows.
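Before turning to the specific filters, the per-time-step filtering idea described above can be sketched as a generic loop. The `predict` and `update` callables below are placeholders that a Kalman Filter or a Particle Filter would implement in its own way; none of the names come from the text.

```python
def run_filter(initial_estimate, sensor_stream, predict, update):
    """Generic filtering loop: propagate the estimate with a motion model, then
    fold in whatever sensor data arrived at this time step.

    `predict(estimate, dt)` and `update(estimate, measurement)` are placeholders;
    a Kalman Filter implements them with the equations of the next section, a
    Particle Filter with sampling and re-weighting of particles.
    """
    estimate = initial_estimate
    for dt, measurements in sensor_stream:   # e.g. (time step, list of sensor readings)
        estimate = predict(estimate, dt)     # time update via the vehicle motion model
        for z in measurements:               # measurement update per available sensor
            estimate = update(estimate, z)
        yield estimate                       # an estimate is available at every step
```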
Kalman Filter

The Kalman Filter [23–25] estimates a parameter (or state), or more typically a state vector, based on relevant measurement data. It combines multiple measurements over time, and from multiple sensors. It does so by weighting measurements using their known statistics, and by employing a model that tracks the estimate (state) change over time between those measurements. That is, the data from sources that we are more confident about gets more weight. The essence of the Kalman Filter is the Kalman gain, usually denoted as K, which is what weights (multiplies) the different data getting combined in the filter. It operates under the assumption that all data sources (measurement values, model equations) have errors that can be well represented as Gaussian distributions parameterized using a mean and a variance. A smaller variance means that the data is more tightly clustered around the true value, and thus more trustworthy (represented as the narrower dashed peak in Fig. 26). Then it intuitively makes sense that the gain would weigh more heavily the data with smaller variance, and that the estimate combining multiple data sources would tend toward those with smaller variances (the bold peak in Fig. 26 is between the two data peaks, but closer to the narrower one). Although the following equation would not be a part of a Kalman filter, the weighting fractions are a simple mathematical representation of this principle and are behind the result in Fig. 26:

$$x_{combined} = \frac{\sigma_{IMU}^2}{\sigma_{GPS}^2 + \sigma_{IMU}^2}\, x_{GPS} + \frac{\sigma_{GPS}^2}{\sigma_{GPS}^2 + \sigma_{IMU}^2}\, x_{IMU} \qquad (3)$$

An actual Kalman filter consists of two steps: prediction (or time update) and measurement update. In the prediction step, the estimate from the previous time step is brought to the current time using equations that model the state change over time. The prediction step is described as follows. The state vector x has multiple (n) dimensions (positions, velocities, etc.) that change dynamically. The state is modeled using the linear stochastic difference equation (4)

$$x_k = A x_{k-1} + B u_k + w_{k-1} \qquad (4)$$

where $x_k$ is the n × 1 state vector of the k-th state, A is the n × n matrix that relates two consecutive states in the absence of the control input and noise, u is the optional l × 1 control input (assumed to be zero here), B is the n × l matrix that relates the control input to the state (also zero here), and w is the n × 1 process white noise with normal probability distribution, mean of 0, and covariance Q. The uncertainty in the model is represented using the estimate error covariance matrix, P. P has two versions at each step k: one just before the measurement is taken into account (a priori), $P_k^-$, and one after the measurement is included (a posteriori), $P_k$. The estimate error covariance is updated using (5).
Fig. 26 Combining measurements based on their statistics [From Maybeck, P. S., "Stochastic Models, Estimation, and Control," Vol. 1, p. 11, New York: Academic Press, 1979]

$$P_k^- = A P_{k-1} A^T + Q_k \qquad (5)$$

In the measurement update step, the measurement vector $z_k$ is compared with (i.e., has subtracted from it) the predicted measurement $H_k x_k^-$ obtained from the measurement model equations, and the difference is multiplied by the Kalman gain $K_k$:

$$x_k = x_k^- + K_k \left( z_k - H_k x_k^- \right), \qquad (6)$$

where H is the m × n matrix that relates the state to the measurement. The measurement model relates the measured quantities to those that are being estimated using linear equations that in matrix form appear as

$$z_k = H_k x_k + v_k, \qquad (7)$$

where z is the m × 1 measurement vector, H is the m × n matrix that relates the state to the measurement, and v is the m × 1 measurement noise that is white with normal probability distribution, mean of 0, and covariance R. The uncertainty in the measurement model is captured using the measurement error covariance matrix, R. The Kalman gain weighs the contribution of the new sensor data based on the relationship between the confidence expressed in the state estimate via P and the confidence expressed in the measurement via R:

$$K_k = P_k^- H_k^T \left( H_k P_k^- H_k^T + R_k \right)^{-1} \qquad (8)$$
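A minimal sketch of these prediction and update equations is shown below for a toy one-dimensional constant-velocity model. The matrices A, H, Q, and R and the measurement values are made up for illustration and are not taken from the text.

```python
import numpy as np

# Toy constant-velocity model: state x = [position, velocity], measurement z = position.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])   # state transition matrix (Eq. 4)
H = np.array([[1.0, 0.0]])              # measurement model (Eq. 7)
Q = 0.01 * np.eye(2)                    # process noise covariance (assumed value)
R = np.array([[0.5]])                   # measurement noise covariance (assumed value)

x = np.array([[0.0], [1.0]])            # initial state estimate
P = np.eye(2)                           # initial estimate error covariance

def kalman_step(x, P, z):
    # Prediction (time update)
    x_pred = A @ x                                           # Eq. (4), no control input
    P_pred = A @ P @ A.T + Q                                 # Eq. (5)
    # Measurement update
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)   # Kalman gain, Eq. (8)
    x_new = x_pred + K @ (z - H @ x_pred)                    # Eq. (6)
    P_new = (np.eye(2) - K @ H) @ P_pred                     # a posteriori covariance
    return x_new, P_new

# Feed a few noisy position measurements of an object moving at roughly 1 m/s
for z_val in [0.11, 0.19, 0.32, 0.38, 0.52]:
    x, P = kalman_step(x, P, np.array([[z_val]]))
print(x.ravel())  # estimated [position, velocity]
```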
The derivation of the filter proves that it is optimal under certain conditions [23–25]. The conditions are: the equations that capture the state changes over time are linear, the equations that reflect the relationship between ideal sensor measurements and the estimated state are linear, and all non-ideal aspects of both the state and measurement models are captured by Gaussian noise models. No real system has these characteristics, but many are sufficiently close to make the Kalman Filter effective. In cases where the linearity assumptions break down, there are modified Kalman Filter formulations to account for non-linearities. The most commonly used is the Extended Kalman Filter, but the Unscented Kalman Filter is also common. Due to the highly non-linear equations that involve ranges to satellites and vehicle motion, the Extended Kalman Filter is the industry standard for GNSS-Inertial navigation systems.

Particle Filter

Unlike the Kalman Filter, which produces a single new estimate at each execution, the Particle Filter at each execution produces a probability distribution. This is its key advantage: when faced with ambiguous measurements that support multiple widely diverged estimates, it is not forced to choose one estimate like the Kalman Filter, but can maintain multiple options in the form of multiple modes of the probability distribution that it outputs. For example, sensor information could be sufficient to limit the location to the approach of any one of several intersections or to any one of several stretches of road, but it might not be sufficient to confidently pinpoint which one. In a scenario like this, the classical Kalman Filter would still be forced to arrive at one solution, either by rejecting some measurements or options based on some criteria, or (worse) by blending them all together to estimate something in between (which is less likely than either one of them alone). The Particle Filter, on the other hand, would maintain all estimate possibilities that have at least some level of probability as indicated by sensor data, represented as a multimodal (multi-peak) probability surface. Then, when additional sensor data becomes available to disambiguate between the different modes, the probability distribution collapses to a single confident mode. Since the Kalman Filter is forced to reject all but one option before sufficient sensor data becomes available, it is likely to have converged on an incorrect estimate and then have no built-in way of returning to the correct one. In practice, though, the Kalman Filter can have additional logic to avoid producing an estimate when sensor data is insufficient by looking out for discrepancies, then wait for sufficient data, and then be able to converge toward the correct estimate. Alternatively, in case of ambiguity, multiple Kalman Filters can be created to track each of the different probable estimates.

The Particle Filter represents its estimate probability distribution using a combination of thousands of weighted estimates, called particles. Each particle is typically
The following two definitions are essential to understanding the SCMS design:

• We define an SCMS component as intrinsically central if it can have only one distinct instance for proper functioning.
• We define a component as central if we choose to have exactly one distinct instance of it in the considered instantiation of the system.

Distinct instances of a component have different identifiers and do not share cryptographic materials. While there is only one SCMS, components that are not central can have multiple instances. We assume that all components have load-balancing mechanisms if needed.

Overview

Figure 1 shows the structure of the SCMS. Each component of the SCMS is depicted by a separate box. Components with a bold bounding box are intrinsically central. Components marked with an 'X' in the upper left corner provide general V2X functionality. Examples include the Root CA and the Intermediate CA. Components marked with 'V/I' in the upper left corner provide separate V2V and V2I functionality. Examples include the Pseudonym Certificate Authority (PCA), the Registration Authority (RA), or Onboard Equipment (OBE). An 'I' marks components that are only involved in V2I communications, such as Road-Side Equipment (RSE). There are four types of connections in the SCMS:

• Solid lines represent regular, secure communications, including certificate bundles.
• Dashed lines represent the credentials chain of trust. This line shows the chain of trust for signature verification. Note that this line is unique in the way that it does not imply data transfer between the two connected components. Enrollment certificates are verified against the Enrollment Certificate Authority (ECA) certificate; pseudonym, application, and identification certificates are verified against the PCA certificate; and certificate revocation lists are verified against the Certificate Revocation List (CRL) Generator (part of the Misbehavior Authority, MA) certificate.
• Dash-dotted lines represent out-of-band communications, e.g., the line between an RSE and the Device Configuration Manager (DCM). We present more detailed information in section "Bootstrapping".
• Lines marked with 'LOP' go through the Location Obscurer Proxy (LOP). The Location Obscurer Proxy is an anonymizer proxy stripping all location-related information from requests.

All online components communicate with each other using a protected and reliable communication channel, utilizing protocols such as those from the Transport Layer Security (TLS) suite [13]. There is an air-gap between some components and
the rest of the system (e.g., Root CA, Electors). Data is encrypted and authenticated at the application layer if it is forwarded via an SCMS component that is not intended to read that data (e.g., data generated by the Linkage Authority that is addressed to the Pseudonym CA but routed via the Registration Authority).

Fig. 1 SCMS architecture
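The application-layer protection just mentioned can be pictured with a short sketch: one component signs and encrypts a payload so that an intermediate component can route it without reading it. The sketch uses PyNaCl purely as an illustrative library and hypothetical key names; the SCMS itself specifies its own certificate formats and cryptographic profiles, so this is not the actual protocol.

```python
from nacl.public import PrivateKey, SealedBox
from nacl.signing import SigningKey

# Hypothetical keys: the LA signs, the PCA is the intended reader.
la_signing_key = SigningKey.generate()
pca_encryption_key = PrivateKey.generate()

# LA: authenticate the payload, then encrypt it to the PCA's public key.
payload = b"pre-linkage values for one certificate batch"
signed = la_signing_key.sign(payload)
ciphertext = SealedBox(pca_encryption_key.public_key).encrypt(signed)

# RA: forwards `ciphertext` unchanged; it cannot read the payload.

# PCA: decrypt, then verify the LA's signature before using the data.
opened = SealedBox(pca_encryption_key).decrypt(ciphertext)
verified_payload = la_signing_key.verify_key.verify(opened)
assert verified_payload == payload
```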
It is most beneficial to review Fig. 1 from left to right. We show three pairs of RSEs and OBEs. These are of the same type and are used to illustrate different use cases of the SCMS. The leftmost pair is used to demonstrate the connections required for bootstrapping, the pair in the middle shows the connections required for certificate provisioning and misbehavior reporting, and the rightmost pair shows the connections required for retrieval of the CRL via the CRL Store.

Components

The following components are part of the SCMS. We list them from top to bottom.

• SCMS Manager: Ensures efficient and fair operation of the SCMS, defines organizational and technical policies, and sets guidelines for reviewing misbehavior and revocation requests to ensure that they are correct and fair according to procedures.
• Electors: Electors represent the center of trust of the SCMS. Electors sign ballots that either endorse or revoke an RCA or another elector. The SCMS Manager distributes ballots to all SCMS components, including devices, to establish trust relationships in RCAs and electors. An elector has a self-signed certificate, and all entities of the system will implicitly trust the initial set of electors. Therefore, all entities have to protect the electors against unauthorized alteration once they have installed the initial set.
• Root Certificate Authority (RCA): An RCA is the root at the top of a certificate chain in the SCMS and hence a trust anchor in the traditional PKI sense. It issues certificates for Intermediate CAs as well as for SCMS components like the Policy Generator and the Misbehavior Authority. An RCA has a self-signed certificate, and a ballot with a quorum vote of the electors establishes trust in an RCA. See section "Elector-Based Root Management" for further explanation. An entity verifies any certificate by verifying all certificates along the chain from the certificate at hand to the trusted RCA. This concept is called chain-validation of certificates and is the fundamental concept of any PKI. If an RCA and its private key are not secure, then the system is potentially compromised. Due to its importance, an RCA is typically off-line when not in active use.
• Policy Generator (PG): Maintains and signs updates of the Global Policy File (GPF), which contains global configuration information, and the Global Certificate Chain File (GCCF), which contains all trust chains of the SCMS.
• Intermediate CA (ICA): This component serves as a secondary Certificate Authority to shield the Root CA from traffic and attacks. The Root CA issues the Intermediate CA certificate.
• Enrollment CA (ECA): Issues enrollment certificates, which act as a passport for a device to authenticate against the RA, e.g., when requesting certificates. Different ECAs may issue enrollment certificates for different geographic regions, manufacturers, or device types.
• Device Configuration Manager (DCM): Attests to the Enrollment CA (ECA) that a device is eligible to receive enrollment certificates, and provides all relevant configuration settings and certificates during bootstrapping.
• Certification Services: Specifies the certification process and provides information on which types of devices are certified to receive digital certificates.
• Device: An end-entity (EE) unit that sends or receives BSMs, e.g., an OBE, an after-market safety device (ASD), an RSE, or a Traffic Management Center (TMC) backend (not depicted in the figure).
• Pseudonym CA (PCA): Issues short-term pseudonym, identification, and application certificates to devices. Individual PCAs may be, e.g., limited to a particular geographic region, a particular manufacturer, or a type of device.
• Registration Authority (RA): Validates and processes requests from the device. From those, it creates individual requests for pseudonym certificates to the PCA. The RA implements mechanisms to ensure that revoked devices are not issued new pseudonym certificates, and that devices are not issued more than one set of certificates for a given time period. In addition, the RA provides authenticated information about SCMS configuration changes to devices, which may include a component changing its network address or certificate, or relaying policy decisions issued by the SCMS Manager. Additionally, when sending pseudonym certificate signing requests to the PCA or forwarding information to the MA, the RA shuffles the requests/reports to prevent the PCA from taking the sequence of requests as an indication of which certificates may belong to the same batch, and to prevent the MA from determining the reporters' routes.
• Linkage Authority (LA): Generates pre-linkage values, which are used to form the linkage values that go in the certificates and support efficient revocation. There are two LAs in the SCMS, referred to as LA1 and LA2. The splitting prevents the operator of an LA from linking certificates belonging to a particular device. For further explanation, see the section titled "Organizational Separation".
• Location Obscurer Proxy (LOP): Hides the location of the requesting device by changing source addresses, and thus prevents linking of network addresses to locations.
• Misbehavior Authority (MA): Processes misbehavior reports to identify potential misbehavior or malfunctioning by devices and, if necessary, revokes them by adding them to the CRL. It also initiates the process of linking a certificate identifier to the corresponding enrollment certificate and adding it to the RA's internal blacklist. The MA contains two subcomponents: Global Misbehavior Detection, which determines which devices are misbehaving, and the CRL Generator (CRLG), which generates, digitally signs, and releases the CRL to the outside world.
• CRL Store (CRLS): A simple pass-through component that stores and distributes CRLs.
• CRL Broadcast (CRLB): A simple pass-through component that broadcasts the current CRL through, e.g., RSEs or a satellite radio system.

Note that the MA, the PG, and the SCMS Manager are the only intrinsically central components of the SCMS.

Organizational Separation

One goal of the SCMS design is to provide an acceptable level of privacy for V2X safety communication applications using pseudonym certificates. Within the SCMS design, different components provide different logical functions. Dedicated
organizations have to provide some of these logical functions to prevent a single organization from being able to determine which pseudonym certificates belong to a given device. This capability would allow an attacker to track a vehicle by combining this information with captured over-the-air messages. This section identifies which SCMS components must be organizationally separate. The general rule is that the same organization cannot run two components if the combined information held by the components would allow an insider to determine which pseudonym certificates belong to a device. This results in the following specific requirements for organizational separation:

• PCA and RA: If one organization ran these two components, it would know which pseudonym certificates had been issued to which device. The reasoning is that the RA knows which requests belong to which device, and the PCA knows the pseudonym certificates that correspond to those requests.
• PCA and one of the LAs: If one organization ran the PCA and either (or both) of the LAs, it could link all pseudonym certificates (from any batch) issued to any device, since an LA knows a set of pre-linkage values that go into the certificate set, and the PCA sees these pre-linkage values at certificate generation time.
• LA1 and LA2: If one organization ran both LAs, it would know all the pre-linkage values and could XOR them to obtain the linkage values, which appear in plaintext in pseudonym certificates. This would allow identification of which pseudonym certificates belong to the same device (see the sketch at the end of this section).
• LOP and (RA or MA): The LOP hides the device's location from the RA and the MA, respectively, so no single organization should jointly run these components.
• MA and (RA, LA, or PCA): No single organization should run a combination of the MA and any of the RA, the LA, or the PCA. If combined, the MA could circumvent restrictions during misbehavior investigation and learn more information than necessary for misbehavior investigation and revocation purposes.

When certificate types other than pseudonym certificates are generated, no specific organizational separation is required.
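To illustrate why splitting the linkage computation across two LAs matters, here is a deliberately simplified sketch of pre-linkage and linkage values. It uses SHA-256 hash chains and a 9-byte truncation purely for illustration; the actual SCMS derivation functions, value sizes, and the encryption of pre-linkage values in transit differ and are specified in [8].

```python
import hashlib

def next_chain_value(prev: bytes) -> bytes:
    """Advance an LA's secret hash chain by one time period (simplified)."""
    return hashlib.sha256(prev).digest()

def pre_linkage_value(la_id: bytes, chain_value: bytes) -> bytes:
    """Pre-linkage value an LA contributes for one time period (simplified)."""
    return hashlib.sha256(la_id + chain_value).digest()[:9]

def linkage_value(plv1: bytes, plv2: bytes) -> bytes:
    """The PCA XORs the two pre-linkage values; the result goes into the certificate."""
    return bytes(a ^ b for a, b in zip(plv1, plv2))

# Each LA starts from its own secret seed (hypothetical values).
chain1, chain2 = b"seed-of-LA1", b"seed-of-LA2"
for period in range(3):
    chain1, chain2 = next_chain_value(chain1), next_chain_value(chain2)
    lv = linkage_value(pre_linkage_value(b"LA1", chain1),
                       pre_linkage_value(b"LA2", chain2))
    print(period, lv.hex())

# Neither LA alone can compute lv, but publishing both seeds (as on the CRL)
# lets anyone regenerate every future linkage value and link the certificates.
```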
SCMS Use Cases

The SCMS supports four primary use cases: device bootstrapping, certificate provisioning, misbehavior reporting, and global misbehavior detection and revocation. One of the cryptographic concepts used to make certificate requests more efficient is the Butterfly Key Expansion algorithm. It reduces the upload size, allowing requests to be made when there is only suboptimal connectivity, and reduces the computational effort required of the device to calculate the keys. A detailed description of the Butterfly Key Expansion algorithm is available in [8].

Bootstrapping

The life cycle of a device starts with bootstrapping. It equips the device with all the information required to communicate with the SCMS and with other devices. It is required that correct information is provided to the device during bootstrapping and that the CAs issue certificates only to certified devices. Any bootstrapping process that results in this information being established securely is acceptable.

The bootstrapping process includes a device, the DCM, the ECA, and the Certification Services component. We assume that the DCM has established communication channels with other SCMS components, such as the ECA or the Policy Generator, and that it will communicate with the device to be bootstrapped using an out-of-band channel in a secure environment. Bootstrapping consists of two operations: initialization and enrollment. Further, we touch upon different forms of re-enrollment and the motivation behind them. Initialization is the process by which the device obtains the certificates it needs to be able to trust received messages. Enrollment is the process by which the device obtains an enrollment certificate that it will need to sign messages to the SCMS.

Information received in the initialization process includes:

1. The certificates of all electors, all Root CAs, and possibly of Intermediate CAs as well as PCAs, to verify received messages
2. The certificates of the Misbehavior Authority, the Policy Generator, and the CRL Generator, to send encrypted misbehavior reports and to verify received policy files and CRLs.

In the enrollment process, the device receives the information required to interact with the SCMS and actively participate in the V2X communications system. This includes:

1. the enrollment certificate to authenticate with and sign messages to the RA,
2. the certificate of the ECA to verify the enrollment certificate, and
3. the certificate of the RA and other information necessary to connect to the RA.

During the enrollment process, the Certification Services provide the DCM with information about device models that are eligible for enrollment. The DCM must receive trustworthy information about the type of the device to be enrolled to ensure that only eligible devices are enrolled. Figure 2 shows an exemplary enrollment process with five steps: (1) the DCM accepts the request of the device, (2) checks the device type certification with the Certification Services, (3) + (4) retrieves the enrollment certificate from the ECA, and (5) forwards the enrollment certificate along with all other relevant information to the device.

Re-enrollment

Re-enrollment of a device might be necessary for several reasons. We define re-enrollment as any of the following:
Fig. 2 Enrollment process

• Reinstatement: A device is reinstated if the original enrollment certificate is reinstated by removing it from the RA's blacklist.
• Re-bootstrapping: A device is re-bootstrapped if the device is wiped and bootstrapping is then executed to issue a new enrollment certificate. This is similar to a factory reset and requires a secure environment.
• Re-issuance: A device is re-issued if the public key of the enrollment certificate is reused to issue a new enrollment certificate. The device keeps all pseudonym certificates and uses the same butterfly key parameters.
• Re-establishment: A device is re-established if the device's integrity can be verified remotely and the device then requests a new enrollment certificate, using the old enrollment certificate to authenticate the request. This does not necessarily call for a secure environment.

Note that we strongly suggest using only re-bootstrapping and re-establishment, and neither reinstatement nor re-issuance. Device re-enrollment is useful in the following scenarios:

• Change of cryptography: Advances in cryptanalysis might make it necessary to replace the underlying cryptographic algorithms. Over the next decades, this will likely be the case as post-quantum cryptography algorithms are introduced. In this case, devices need to receive updated firmware, ideally over-the-air, and then request new enrollment certificates that use the updated cryptographic scheme.
• Device revocation via CRL: Re-bootstrapping is the only option if the MA has revoked a device and listed it on the CRL.
• Enrollment certificate rollover: It is good practice and a security requirement in the SCMS to limit the lifespan of enrollment certificates, which motivates the need for a rollover to a new enrollment certificate over-the-air; this is equivalent
to re-establishing a device. A device can request a new enrollment certificate if the MA has not revoked its current enrollment certificate. The device creates a new private/public key pair and includes the public key in its certificate rollover request to the RA. The device digitally signs the rollover request with the private key of its current enrollment certificate. The RA verifies the request and forwards it to the ECA, and the ECA, in turn, signs the requested enrollment certificate containing the new public key.
• Device revocation due to a revoked ECA: If an ECA has been revoked, such that a device now holds an invalid enrollment certificate, re-enrollment is necessary as well. As a standard approach, a device should be re-bootstrapped. Re-establishment of devices that hold an enrollment certificate from a revoked ECA creates the risk of issuing a new enrollment certificate to a malicious device.
• Root CA and ICA revocation: If a Root CA certificate is revoked, it is assumed that a new Root CA certificate is established by means of the electors (see section "Elector-Based Root Management") and that all relevant components have been equipped with a new certificate under the new Root CA certificate. ECAs need to be re-certified, and the SCMS Manager has to give permission to re-establish devices that hold an enrollment certificate issued by a re-certified ECA if there is evidence that there was no ECA compromise. Otherwise, devices need to be re-bootstrapped.

Certificate Provisioning

The certificate provisioning process for OBE pseudonym certificates is the most complicated provisioning process in the SCMS because it has to protect end-user privacy and minimize the required computational effort on the resource-constrained device. In the following, we focus on the pseudonym certificate provisioning process, since the provisioning of other certificate types is a subset of it in terms of functionality. Figure 3 illustrates this process, which is designed to protect privacy against inside and outside attackers. The SCMS design ensures that no individual component knows or creates a complete set of data that would enable tracking of a vehicle. The RA knows the enrollment certificate of a device that requests pseudonym certificates, but even though the RA delivers the pseudonym certificates to the device, it is not able to read the content of those certificates because the PCA encrypts them to the device. The PCA creates each pseudonym certificate individually, but it does not know the recipient of those certificates, nor does it know which certificates the RA delivers to the same device. The LAs generate masked hash-chain values, and the PCA embeds them in each certificate as so-called linkage values. The MA unmasks them by publishing a secret linkage seed pair on the CRL, which efficiently links and revokes all future pseudonym certificates of a device. However, a single LA is not able to track devices by linking certificates or to revoke a device; rather, both LAs, the PCA, and the RA need to collaborate for the revocation process. Privacy mechanisms in the SCMS include:
Fig. 3 Linkage value calculation

• Obscuring Physical Location: The LOP obscures the physical location of an end-entity device to hide it from the RA and the MA.
• Hiding Certificates from RA: The butterfly key expansion process [8, 9] ensures that no one can correlate the public key seeds in requests with the resulting certificates. Encrypting the certificates to the device prevents the RA from relating certificates to a device.
• Hiding Receiver and Certificate Linkage from PCA: The RA expands incoming requests using butterfly keys and then splits these requests into requests for individual certificates. It then shuffles the requests of all devices before sending them to the PCA. This prevents the PCA from learning whether any two certificate requests belong to the same device, which would enable the PCA to link certificates. The RA should have configuration parameters for shuffling; e.g., the POC implementation shuffles 10,000 requests, or a day's worth of requests, whichever is reached first.

We explain the concept of linkage values next, as it is essential to understanding the certificate provisioning process, which we explain afterward.

Linkage Values

For any set of pseudonym certificates provided to a device, the SCMS inserts linkage values in the certificates that can be used to revoke all of the certificates with validity