
rewrite service.sls to be able to use it with a cluster manager like Pacemaker #48

Open · hoonetorg opened this issue Apr 7, 2016 · 8 comments

Comments

@hoonetorg
Contributor

See this commit and comment:
hoonetorg@cc242d7

In short, the current situation:
enable: True -> haproxy starts at boot and keeps running.
enable: False -> haproxy doesn't start at boot and is stopped on a Salt run if it is running.

A cluster manager needs:

  • the service disabled at boot, and
  • to decide by itself when the service is started or stopped (see the pillar sketch below)
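For illustration, a minimal pillar sketch of the cluster case (the nested service:state/service:enable keys are hypothetical here, mirroring the template layout discussed further down, not the formula's current pillar):

# Hypothetical pillar sketch; these keys don't exist in the formula yet
haproxy:
  service:
    state: disabled   # Pacemaker decides when haproxy runs
    enable: False     # do not start at boot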
@hoonetorg
Contributor Author

I could create a PR if you want.

@johnkeates
Contributor

Looks a bit convoluted to me, and doesn't seem to take systemd into account?

@hoonetorg
Contributor Author

service.(running|dead|enabled): also works with systemd.

Regarding "convoluted": using a default.yml

  service:
    name: template
    state: running
    enable: True

and then using variables instead of that long salt['pillar.get']..., it doesn't look that bad anymore :).

# -*- coding: utf-8 -*-
# vim: ft=sls

{% from "template/map.jinja" import template with context %}

template_service__service:
  service.{{ template.service.state }}:
    - name: {{ template.service.name }}
{% if template.service.state in [ 'running', 'dead' ] %}
    - enable: {{ template.service.enable }}
{% endif %}

What do you think?

I could first assign all service-related settings to variables, and then it would look similar.

@hoonetorg
Contributor Author

Also, service.dead doesn't support the reload option. I would fix that before creating a PR.

@johnkeates
Contributor

It's the whole file.replace deal I was targeting. Anyway, the best way would be to:

  • create a default that starts + enables
  • look up service:autostart (bool) and service:run (bool)

Then service can be used to always reload, set autostart to the value of service:autostart, and start or kill the service depending on service:run (see the sketch below).

This way, the terminology is somewhat more explicit, which is important since we are deviating from the normal process (which is: let the OS decide, and start + enable by default).

I think I'll patch map.jinja and defaults.yaml so this can be overridden in a more standardised way using the lookup sub-pillar.
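For illustration, here is a rough Jinja/SLS sketch of that lookup. The pillar keys haproxy:service:autostart and haproxy:service:run are the proposal above, not something the formula currently supports:

{# Hypothetical sketch of the proposed autostart/run lookup #}
{% set autostart = salt['pillar.get']('haproxy:service:autostart', True) %}
{% set run = salt['pillar.get']('haproxy:service:run', True) %}

haproxy.service:
  service.{{ 'running' if run else 'dead' }}:
    - name: haproxy
    - enable: {{ autostart }}
{% if run %}
    - reload: True
{% endif %}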

@hoonetorg
Contributor Author

These are the two rendered SLS snippets that would make 99% of people happy.

non-cluster (already working):

haproxy.service:
  service.running:
    - name: haproxy
    - enable: True
    - reload: True
    - require:
      - pkg: haproxy

cluster (not possible yet):

haproxy.service:
  service.disabled:
    - name: haproxy
    - require:
      - pkg: haproxy

The changes to the file.replace /etc/default/haproxy part were only done to make the regex more failsafe (in case somebody puts blanks/tabs before/after ENABLED).
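For illustration, a whitespace-tolerant replacement could look roughly like this (a sketch, not the exact regex from the commit linked above):

# Sketch only: tolerate blanks/tabs around ENABLED in /etc/default/haproxy
haproxy_default_enabled:
  file.replace:
    - name: /etc/default/haproxy
    - pattern: '^\s*ENABLED\s*=.*$'
    - repl: 'ENABLED=1'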

@hoonetorg
Contributor Author

@johnkeates: Will you prepare something on that?

Remark:
I can imagine what you want to achieve, but I wouldn't recommend renaming enable to autostart (it would break backward compatibility). The pillar would stay:

haproxy:
  enable: True    # defaults to True; only means: start on boot (autostart)
  running: True   # also defaults to True; a change in haproxy.cfg triggers a reload

@johnkeates
Contributor

Good point, I guess it stays ;-)
