Flexible Persistent Data Storage
Flexible Persistent Data Storage is an S3-compatible storage system designed to simplify data management within pipeline jobs. If your Resource Package includes Persistent Data Storage, a separate bucket is created automatically for each of your GitLab Integrations.
Persistent Data Storage is billed based on actual utilization, just like CPU and memory, keeping your CI/CD process cost-effective as it scales. If the total data stored across all buckets in your account exceeds the amount included in your Resource Package, any GB-seconds used beyond the package limit incur per-second charges at that package's rate.
Accessing Data Storage
Persistent Data Storage can be accessed in two ways:
The S3-compatible API: allows direct interaction with the storage system using standard S3 API calls, ideal for external data manipulation or integration with other services (see the sketch after this list).
Direct access from pipeline jobs: offers a seamless way to handle and manipulate data within the pipeline's context.
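As an illustration of the S3-compatible API option, any standard S3 client can be pointed at the storage endpoint. Below is a minimal sketch using the AWS CLI in a pipeline job; the job name, the STORAGE_ENDPOINT and STORAGE_BUCKET variables, and the credential variables are placeholders rather than values provided by Puzl, so check your Integration settings for the actual endpoint and credentials.

# .gitlab-ci.yml — a sketch only; endpoint, bucket, and credentials are assumed placeholders
upload-report:
  image:
    name: amazon/aws-cli:latest
    entrypoint: [""]   # override the image entrypoint so script lines run in a shell
  variables:
    AWS_ACCESS_KEY_ID: "$STORAGE_ACCESS_KEY"        # assumed to be defined as CI/CD variables
    AWS_SECRET_ACCESS_KEY: "$STORAGE_SECRET_KEY"
  script:
    # Copy a build result into the bucket using a standard S3 API call
    - aws --endpoint-url "$STORAGE_ENDPOINT" s3 cp ./report.html "s3://$STORAGE_BUCKET/reports/report.html"

The same commands work from any external machine with network access to the endpoint, which is what makes the API route convenient for integrations outside the pipeline.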
Use Cases
GitLab Cache
The storage system includes a pre-configured setup for a standard GitLab cache, accessible at the /cache directory of the bucket. Managed entirely by Puzl, this feature enhances pipeline efficiency by optimizing build and deployment times.
The cache is always shared across all pipeline jobs within the same GitLab Integration, so you don't need to worry about runner concurrency.
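No extra configuration is needed in your jobs: the standard GitLab cache keywords in .gitlab-ci.yml work as usual, and the cached data lands in the managed /cache directory. A minimal sketch, where the image, cache key, and paths are illustrative:

# .gitlab-ci.yml — illustrative image, key, and paths
build:
  image: node:20
  cache:
    key: "$CI_COMMIT_REF_SLUG"
    paths:
      - node_modules/   # cached between pipeline jobs via the managed /cache directory
  script:
    - npm ci
    - npm run build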
Shared Persistent Directories
To address larger data storage needs, you can use the sharedPersistentMountPoints setting in the runner configuration. This setting creates directories on the persistent Flexible Cloud Storage that are accessible across all containers within all pipeline jobs of a specific runner.
The same persistent directories can be shared among multiple runners within the same Integration, simply by specifying the same mount paths for each runner.
Let's assume you created two runners inside the same Integration with the following sharedPersistentMountPoints settings.
The first runner:

spec:
  pipelines:
    sharedPersistentMountPoints: ["/my/data", "/my-other/data"]

The second runner:

spec:
  pipelines:
    sharedPersistentMountPoints: ["/my/data"]
In this case:
The /my/data directory will be available across all pipeline jobs of both runners.
The /my-other/data directory will be available only across the pipeline jobs of the first runner.
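As a sketch of how pipeline jobs might use such a directory, one job can write data that a later job, possibly picked up by the other runner, reads back. The job names and commands below are illustrative; the /my/data path comes from the runner configuration above.

# .gitlab-ci.yml — illustrative jobs using the shared persistent directory
prepare:
  script:
    # Write into the shared persistent directory mounted by the runner
    - mkdir -p /my/data/datasets
    - echo "v1" > /my/data/datasets/version.txt

consume:
  needs: ["prepare"]
  script:
    # The same directory contents are visible to jobs of either runner
    - cat /my/data/datasets/version.txt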