Add Foreman development environment #259
Conversation
Updates now support broker hosts. I opened a change to obsah to support asking for the password naturally: theforeman/obsah#91
If anyone wants to follow along and doesn't have a properly set up machine (like me), here's a handy scriptlet to run this within a container against a remote machine (remote from the container's point of view; chances are it's the machine hosting the container).

Spawn a Fedora 42 container:

Inside:
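The original scriptlet isn't shown in full above, so here is a hedged sketch of the same trick: spin up a Fedora 42 container with sshd so the deploy tooling can treat it as a "remote" host. Container name, hostname, and port mapping are assumptions, not the commenter's exact setup.

```shell
# Hedged sketch (names and ports are assumptions): write a small script
# that spawns a Fedora 42 container acting as the deployment target.
cat > /tmp/spawn-target.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
# Start a long-running Fedora 42 container, exposing SSH on host port 2222
podman run -d --name foreman-target \
  --hostname foreman-target.example.com \
  -p 2222:22 \
  registry.fedoraproject.org/fedora:42 sleep infinity
# Install and start sshd inside so it behaves like a remote machine
podman exec foreman-target dnf install -y openssh-server
podman exec foreman-target ssh-keygen -A
podman exec foreman-target /usr/sbin/sshd
EOF
chmod +x /tmp/spawn-target.sh
# Only check the syntax here; run it where podman is available.
bash -n /tmp/spawn-target.sh && echo "syntax OK"
```

From there you would point the deployment at `localhost:2222` as if it were a remote box.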
ekohl left a comment:
On a related note, I've looked at using Puma directly to serve HTTPS (https://community.theforeman.org/t/setting-up-ssl-for-running-foreman-on-https/36168/4) and I still think that's an interesting model: keep all services able to function fully standalone, claiming the entire hostname & port combination.
IMHO this works towards the goal of getting rid of a mandatory Apache in front of all services, allowing you to compose it all from independent services.
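The "Puma serves HTTPS itself" model from the linked thread can be sketched roughly as follows. This is a minimal illustration, not the setup from the forum post: the paths, port, and self-signed certificate are assumptions, and Puma's `ssl://` bind URI is doing the work of claiming the hostname and port directly, with no Apache in front.

```shell
# Sketch: serve Foreman over HTTPS straight from Puma (all paths/ports
# are assumptions for illustration).
DIR=/tmp/foreman-puma-ssl
mkdir -p "$DIR"

# Generate a throwaway self-signed certificate for local testing only
openssl req -x509 -newkey rsa:2048 -nodes -days 7 \
  -subj "/CN=foreman.example.com" \
  -keyout "$DIR/key.pem" -out "$DIR/cert.pem" 2>/dev/null

# Puma's ssl:// binding terminates TLS itself, so the service stands
# alone on its hostname:port. Printed here rather than executed:
echo "bundle exec puma -b 'ssl://0.0.0.0:8443?key=$DIR/key.pem&cert=$DIR/cert.pem'"
```

With each service binding its own `ssl://` endpoint like this, nothing forces a shared reverse proxy in front of the composition.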
I was thinking of starting with documenting where we have tested this and where developers are known to deploy it. I'd love to be as formal as possible by testing combinations, but I also know that with development setups there is a lot more creativity applied.
Fair question. I asked myself the same. I think that's part of the question for me: do we merge a working version and then edit it, or do we edit and narrow this down first and then merge?
How about we write something like this in the development setup README:
Ideally, the templates here and in prod are identical (if they can be). I am fine merging as-is and converging later, but we do need to do that, as otherwise we'll quickly be back in "dev and prod diverge too much" land.
I tested locally (with Vagrant) with the following patch and things work as expected (I enabled foreman_ansible etc.). I did not test remote deployments with broker etc., but I think it's fine enough for now to go in and be polished later.
evgeni left a comment:
two nitpicks, but otherwise LGTM
Signed-off-by: Eric D. Helms <[email protected]>
This adds a way to deploy a development environment similar to how the devel boxes in Forklift work. It deploys the backend services as containers to match production, and clones the source code directly to the VM, then installs and configures it.
In my testing, the time to get to a working development setup was:
Forklift: 29 minutes
foremanctl: 16 minutes
I think this is a good starting baseline, as it provides a way to develop using our container-based installation that is similar to how developers work today. I'll be exploring other development setups that can be follow-ons to this method. The important bit for me is getting an environment that works within the context of this repository and our containers.
I did also try running a development container and mounting the code into it. Within the VM environment this proved incredibly slow, likely due to the number of files being shared and the I/O overhead of the mount layer.
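For reference, the bind-mount approach described above would look roughly like this. The image name and paths are assumptions for illustration, not what was actually tested; the command is printed rather than executed so it can be reviewed first.

```shell
# Hedged sketch of mounting a host checkout into a dev container.
# SRC and IMG are assumptions; override them for a real setup.
SRC="${SRC:-$HOME/foreman}"                    # host source checkout
IMG="${IMG:-quay.io/foreman/foreman:develop}"  # image tag is an assumption

# ':Z' relabels the volume for SELinux. Every file access in a large
# Rails tree crosses the mount layer, which is where the slowness
# described above comes from.
CMD="podman run --rm -it -v $SRC:/srv/foreman:Z -p 3000:3000 $IMG \
bundle exec rails server -b 0.0.0.0"
echo "$CMD"
```

Cloning the source onto the VM, as this PR does, avoids that per-file overhead entirely at the cost of the code not living on the host.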