This project uses the scripts found in openshift-developer-tools to set up and maintain OpenShift environments (both local and hosted). Refer to the OpenShift Scripts documentation for details.
These instructions assume:
./openshift directory for the project.

Good to have:
For the commands mentioned in these instructions, you can use the -h parameter for usage help and options information.
When working with OpenShift, commands are typically issued against the server-project pair to which you are currently connected. Therefore, when you are working with multiple servers (local and remote, for instance) you should always be aware of your current context so you don't inadvertently issue a command against the wrong server and project. Although you can log in to more than one server at a time, it's always a good idea to completely log out of one server before working on another.
The automation tools provided by openshift-developer-tools hide some of these details from you, in that they perform project context switching automatically. However, what they don't do is provide server context switching. They assume you are aware of your server context and you have logged into the correct server.
Some useful commands to help you determine your current context:
- oc whoami -c - Lists your current server and user context.
- oc project - Lists your current project context.
- oc project [NAME] - Switches to a different project context.
- oc projects - Lists the projects available to you on the current server.

TODO: Add this process to the build configurations...
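For example, a quick context check before running any of the scripts might look like this (the project name used here is illustrative):

```shell
# Confirm which server and user you are logged in as
oc whoami -c

# Confirm which project subsequent commands will target
oc project

# Switch projects if necessary (project name is an example)
oc project jag-csb-edivorce-tools
```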
docker build -t s2i-nginx git://github.com/BCDevOps/s2i-nginx
docker tag s2i-nginx docker-registry.pathfinder.gov.bc.ca/jag-csb-edivorce-tools/s2i-nginx
docker login docker-registry.pathfinder.gov.bc.ca -u <username> -p <token>
docker push docker-registry.pathfinder.gov.bc.ca/jag-csb-edivorce-tools/s2i-nginx
(your docker token is the same as your OpenShift login token)
cd /<PathToWorkingCopy>/openshift
initOSProjects.sh
This will initialize the projects with permissions that allow images from one project (tools) to be deployed into another project (dev, test, prod). For production environments it will also ensure the persistent storage services exist.
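Under the hood, the cross-project image pull permission amounts to a role binding like the following sketch (the dev project name is an assumption based on this project's naming; the exact commands the script runs may differ):

```shell
# Allow service accounts in the dev project to pull images
# built in the tools project (project names are assumptions)
oc policy add-role-to-group system:image-puller \
    system:serviceaccounts:jag-csb-edivorce-dev \
    -n jag-csb-edivorce-tools
```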
genBuilds.sh
This will generate and deploy the build configurations into the tools project. Follow the instructions written to the command line.
If the project contains any Jenkins pipelines a Jenkins instance will be deployed into the tools project automatically once the first pipeline is deployed by the scripts. OpenShift will automatically wire the Jenkins pipelines to Jenkins projects within Jenkins.
Use -h to get advanced usage information. Use the -l option to apply any local settings you have configured; when working with a local cluster you should always use the -l option.
If you are adding build and image configurations you can re-run this script. You will encounter errors for any of the resources that already exist, but you can safely ignore these errors and allow the script to continue.
If you are updating build and image configurations use the -u option.
If you are adding and updating build and image configurations, run the script without the -u option first to create the new resources and then again with the -u option to update the existing configurations.
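Putting those cases together, a typical add-then-update sequence looks like this (run from the ./openshift directory; "resource already exists" errors on the first pass can be ignored):

```shell
# First pass: create any new build/image configurations
genBuilds.sh

# Second pass: update the configurations that already existed
genBuilds.sh -u
```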
genDepls.sh -e <EnvironmentName, one of [dev|test|prod]>
This will generate and deploy the deployment configurations into the selected project; dev, test, or prod. Follow the instructions written to the command line.
Use -h to get advanced usage information. Use the -l option to apply any local settings you have configured; when working with a local cluster you should always use the -l option.
PROXY_NETWORK
While running genDepls.sh you will be prompted for the network address of the upstream proxy. This is used to ensure that requests come from the Justice Proxy only. You will need to enter the address in IPv4 CIDR notation, e.g. 10.10.15.10/16. The actual value you need to enter cannot be stored on GitHub because this would violate BC Government GitHub policies. The PROXY_NETWORK setting is currently the same for all 3 environments (dev, test, and prod).
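If you want to sanity-check the value before pasting it in, a minimal IPv4 CIDR format check can be done in the shell (this is an illustrative helper, not part of the scripts; it checks the shape only, not that each octet is <= 255 or the prefix length is <= 32):

```shell
# Returns success when the argument matches basic IPv4 CIDR
# notation, e.g. 10.10.15.10/16 (format check only)
validate_cidr() {
  echo "$1" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}/[0-9]{1,2}$'
}

validate_cidr "10.10.15.10/16" && echo "looks like valid CIDR notation"
```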
SITEMINDER_WHITE_LIST
While running genDepls.sh you will be prompted for a list of IP addresses that make up the white-list of hosts allowed to access the service.
The list must be provided as a space delimited list of IP addresses.
The actual values cannot be stored on GitHub because this would violate BC Government GitHub policies. The addresses are different for each environment (dev, test, and prod).
BASICAUTH_ENABLED
Turns on simple basic authentication for test and dev environments. This setting is set to "True" in the dev and test environments only.
BASICAUTH_USERNAME / BASICAUTH_PASSWORD
Both the Username and Password will be randomly generated and can later be found by a project administrator in the Secrets section of the related OpenShift project.
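Once generated, the credentials can be read back with oc by a project administrator; the secret and key names below are assumptions for illustration, so check the actual names in the Secrets section of the project:

```shell
# List secrets in the dev project to find the basic auth secret
oc get secrets -n jag-csb-edivorce-dev

# Decode one value from the secret (secret/key names are illustrative)
oc get secret basicauth-credentials -n jag-csb-edivorce-dev \
    -o jsonpath='{.data.username}' | base64 --decode
```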
If you are adding deployment configurations you can re-run this script. You will encounter errors for any of the resources that already exist, but you can safely ignore these errors and allow the script to continue.
If you are updating deployment configurations use the -u option.
If you are adding and updating deployment configurations, run the script without the -u option first to create the new resources and then again with the -u option to update the existing configurations.
Note:
- Some settings on some resources are immutable. To replace these resources you will need to delete and recreate the associated resource(s).
- Updating the deployment configurations can affect (overwrite) auto-generated secrets such as the database username and password.
- Care must be taken with resources containing credentials or other auto-generated values. You must ensure such resources are replaced using the same values.
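For an immutable resource, replacement is a delete followed by a recreate, along these lines (resource and file names are illustrative, not from this project):

```shell
# Delete the existing resource ...
oc delete service my-service

# ... then recreate it from the generated template output
oc create -f my-service.json
```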
Log into the web console and go to the "eDivorce App (tools)" project.
Select Builds => Pipelines => build-and-deploy-to-dev => Configuration.
Copy the GitHub webhook URL.
Go to the repository settings in GitHub, and add the webhook URL under "Webhooks".
There are three deployment environments set up for different purposes within OpenShift. They are available at the URLs below.
These instructions assume you have 4 EMPTY projects created in OpenShift: tools, dev, test, and prod.

Log in to the server with the OpenShift CLI (oc), e.g.:

oc login https://console.pathfinder.gov.bc.ca:8443 --token=xtyz123xtyz123xtyz123xtyz123

oc -h provides a summary of available oc commands.

If you need to access the DB, you can either use the terminal window in the OpenShift console or the oc rsh command to get to the command line on the postgresql pod.
The pod identifiers change with every deployment, so you need to find the current one:
oc get pods | grep Running
oc rsh postgresql-2-qp0oh
psql -d default
\dt
select count(*) from core_bceiduser;
\q
exit
By default your Django application is served with gunicorn and configured to output its access log to stderr. You can look at the combined stdout and stderr of a given pod with this command:
oc get pods # list all pods in your project
oc logs <pod-name>
This can be useful to observe the correct functioning of your application.
If you are getting an "Internal Server Error" message on the test or prod environments, follow the steps below to enter debug mode.
Use the oc rsh command to get to the command line on the Django application pod, then edit the settings file:

vi edivorce/settings/openshift.py
At the very bottom of the file, add the line:
DEBUG = True
In order to load the new configuration you need to restart gunicorn. However, we can't restart gunicorn in the normal way because we don't have sudo access inside the OpenShift pod.
Type the command:
ps -x
You'll get a list of processes, and you need to find the correct PIDs
PID TTY STAT TIME COMMAND
1 ? Ss 0:00 /opt/app-root/bin/python3 /opt/app-root/bin/gunicorn wsgi --bind=0.0.0.0:8080 --access-logfile=- --config gunicorn_config.py
38 ? S 0:02 /opt/app-root/bin/python3 /opt/app-root/bin/gunicorn wsgi --bind=0.0.0.0:8080 --access-logfile=- --config gunicorn_config.py
39 ? S 0:02 /opt/app-root/bin/python3 /opt/app-root/bin/gunicorn wsgi --bind=0.0.0.0:8080 --access-logfile=- --config gunicorn_config.py
40 ? S 0:02 /opt/app-root/bin/python3 /opt/app-root/bin/gunicorn wsgi --bind=0.0.0.0:8080 --access-logfile=- --config gunicorn_config.py
41 ? S 0:02 /opt/app-root/bin/python3 /opt/app-root/bin/gunicorn wsgi --bind=0.0.0.0:8080 --access-logfile=- --config gunicorn_config.py
50 ? Ss 0:00 /bin/sh
Kill all the gunicorn processes EXCEPT #1. #1 is the master process, and it will restart the others for us.
kill 38 39 40 41
Wait about 30 seconds, then type ps -x again to confirm that new worker processes (with new PIDs) have been created.
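As an aside, gunicorn's master process reloads its configuration and gracefully restarts its workers when it receives a SIGHUP, so an alternative to killing the workers individually is:

```shell
# Ask the gunicorn master (PID 1 in this pod) to reload its
# configuration and restart the worker processes
kill -HUP 1
```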
Now you can see the yellow Django debug screen!!!