Flexible Persistent File Storage

Flexible Persistent File Storage incorporates an S3®-compatible system designed to enhance data management within jobs. If your Plan includes Persistent File Storage, a separate bucket is created automatically for each of your GitLab Integrations.

Persistent File Storage is billed based on actual utilization, alongside CPU and memory, ensuring cost-effective scalability of your CI/CD process. If the total data stored across all buckets of your account exceeds the amount included in your Plan, any GB-seconds used beyond the package's limit incur per-second charges, calculated at the specific package rate.

Persistent File Storage can be accessed in two ways:

  • The S3®-compatible API: allows direct interaction with the storage system using standard S3® API calls, which is ideal for external data manipulation or integration with other services.

  • Direct access from jobs: offers a seamless approach to data handling and manipulation within the pipeline's context.
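As a sketch of the first option, any standard S3® client can talk to the bucket. The endpoint URL, bucket name, and credentials below are placeholders, not actual values; substitute the ones provided for your Integration:

```shell
# Placeholder credentials -- use the access keys issued for your Integration.
export AWS_ACCESS_KEY_ID="<your-access-key>"
export AWS_SECRET_ACCESS_KEY="<your-secret-key>"

# List the contents of your Integration's bucket via the S3-compatible endpoint.
aws s3 ls "s3://<your-bucket-name>/" \
  --endpoint-url "https://<your-storage-endpoint>"

# Upload a file to the bucket from outside the pipeline.
aws s3 cp ./artifact.tar.gz "s3://<your-bucket-name>/artifacts/" \
  --endpoint-url "https://<your-storage-endpoint>"
```

The `--endpoint-url` flag is how the AWS CLI is pointed at any S3®-compatible service instead of AWS itself; other S3® clients expose an equivalent setting.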

The storage system includes a pre-configured setup for a standard GitLab cache, accessible at the /cache directory of the bucket. Managed entirely by Puzl, this feature is designed to enhance pipeline efficiency by optimizing build and deployment times.

note

Cache is always shared across all jobs within the same GitLab Integration, so you don't need to worry about runner concurrency.
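For illustration, a job opts into this pre-configured cache through GitLab's standard `cache:` keyword; the job name, key, and paths below are hypothetical:

```yaml
# .gitlab-ci.yml -- illustrative job using the standard GitLab cache,
# which is stored under the /cache directory of the bucket.
build:
  script:
    - npm ci
    - npm run build
  cache:
    key: "$CI_COMMIT_REF_SLUG"   # one cache per branch
    paths:
      - node_modules/
```

No storage-specific configuration is needed in the job itself; the cache backend is managed by Puzl.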

To address extensive data storage needs, you can use the sharedPersistentMountPoints setting in the runner configurations. This feature allows for the creation of directories on the Flexible Persistent File Storage, which are accessible across all containers within all jobs of a specific runner.

tip

The same persistent directories can be shared among multiple runners within the same Integration by specifying the same mount paths for each runner.

Let's assume you created two runners inside the same Integration with the following sharedPersistentMountPoints settings:

First runner:

```yaml
spec:
  pipelines:
    sharedPersistentMountPoints: ["/my/data", "/my-other/data"]
```

Second runner:

```yaml
spec:
  pipelines:
    sharedPersistentMountPoints: ["/my/data"]
```

In this case:

  • /my/data directory will be available across all jobs of both runners.
  • /my-other/data directory will be available only across the jobs of the first runner.
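To sketch how a job might rely on such a directory (the job and the `$DATASET_URL` variable are hypothetical; the paths match the example above):

```yaml
# .gitlab-ci.yml -- jobs picked up by either runner can read and write
# /my/data directly; only jobs on the first runner would see /my-other/data.
warm-dataset:
  script:
    # Download a large dataset once; later jobs on both runners reuse it
    # from the persistent mount instead of fetching it again.
    - test -f /my/data/dataset.bin || curl -fsSL -o /my/data/dataset.bin "$DATASET_URL"
    - ls -lh /my/data
```

Because the directory lives on the Flexible Persistent File Storage rather than on a job's ephemeral filesystem, its contents survive across jobs and pipelines.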