Spring Data Commons provides the interfaces and code shared between the various datastore-specific implementations.

== Running CI tasks locally

Since this pipeline is purely Docker-based, it's easy to:

* Debug what went wrong on your local machine.
* Test out a tweak to your `test.sh` script before sending it out.
* Experiment against a new image before submitting your pull request.

All of these use cases are good reasons to run the same steps the CI server runs on your own machine.

IMPORTANT: To do this, you must have Docker installed on your machine.
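If you want to confirm up front that Docker is ready, a quick sanity check (not part of the repository's scripts) looks like this:

[source,bash]
----
# Confirm the Docker CLI is installed and the daemon is reachable
docker --version
docker info > /dev/null && echo "Docker daemon is reachable"
----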

1. `docker run -it --mount type=bind,source="$(pwd)",target=/spring-data-commons-github adoptopenjdk/openjdk8:latest /bin/bash`
+
This launches the Docker image and mounts your source code at `/spring-data-commons-github` inside the container.
2. `cd spring-data-commons-github`
+
Next, run the `test.sh` script from inside the container:
3. `PROFILE=none ci/test.sh` (or whatever profile you need to test out)

Since the container bind-mounts your source directory, you can make edits from your IDE on the host and keep re-running build jobs inside the container.
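If you prefer to run the test script in one shot rather than from an interactive shell, something like the following should be equivalent to the steps above (same image and bind mount; `PROFILE=none` is just an example value):

[source,bash]
----
# Run the CI test script non-interactively, using the same image and bind mount as above
docker run --rm \
  --mount type=bind,source="$(pwd)",target=/spring-data-commons-github \
  adoptopenjdk/openjdk8:latest \
  /bin/bash -c "cd /spring-data-commons-github && PROFILE=none ci/test.sh"
----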

If you need to test the `build.sh` script, do this:

1. `docker run -it --mount type=bind,source="$(pwd)",target=/spring-data-commons-github adoptopenjdk/openjdk8:latest /bin/bash`
+
This launches the Docker image and mounts your source code at `/spring-data-commons-github` inside the container.
2. `cd spring-data-commons-github`
+
Next, run the `build.sh` script from inside the container:
3. `ci/build.sh`

IMPORTANT: This will attempt to deploy to Artifactory. Without credentials, that step will fail, leaving you with a locally built artifact.
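The same one-shot approach works for the build script (expect the Artifactory deploy step to fail locally, as noted above):

[source,bash]
----
# Run the CI build script non-interactively; the deploy step will fail without credentials
docker run --rm \
  --mount type=bind,source="$(pwd)",target=/spring-data-commons-github \
  adoptopenjdk/openjdk8:latest \
  /bin/bash -c "cd /spring-data-commons-github && ci/build.sh"
----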

NOTE: Docker containers can eat up disk space fast! From time to time, run `docker system prune` to clean out old images.
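For example, to check how much space Docker is using before pruning:

[source,bash]
----
# Show Docker disk usage, then remove stopped containers, unused networks, and dangling images
docker system df
docker system prune
----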