One Python to rule them all #143

Closed
wants to merge 22 commits into from

Conversation

kornicameister
Contributor

Rewrite all the images to leverage the newly added monasca/python image.

@kornicameister kornicameister changed the title One Python to rule them all [WIP] One Python to rule them all Aug 7, 2017
@@ -1,4 +1,4 @@
-FROM alpine:3.5
+FROM monasca/python:2
Member

Maybe it would be better to use the full image tag here (with the date and timestamp)? That way we'll never need to worry about local caches using an old python-base with a new upstream git. The pr-bot will (soon™) handle updates to the base image automatically.
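For illustration, a fully pinned FROM line would look something along these lines (the timestamped tag here just shows the format produced by the base-image build, e.g. the one mentioned further down in this thread, not a specific recommendation):

FROM monasca/python:2-20170808-155214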

Contributor Author

Sure thing, will look into that.

Contributor Author

#142 should land first so this can refer to the correct tags.

@kornicameister
Contributor Author

Replicated the Travis run locally and everything built OK (it couldn't launch the services, but I had devstack running at the same time, so the ports were already taken), so I guess we need to have #142 reviewed & merged.

@kornicameister
Contributor Author

Huh, that's odd... according to the build of #142, the version of monasca/python ought to be 2-20170808-155214 or 3-20170808-155214. However, neither of those can be found on the hub.

@timothyb89 shouldn't the updated image (with the newer timestamp) be uploaded to the hub?

This was referenced Aug 9, 2017
@timothyb89
Member

Ah, yeah, looks like I forgot to tag the merge commit to have it push the new image. It built (as it always will) but didn't 'release' the new image. I'll make sure it pushes in #148.

FWIW I have some improvements to this workflow in my backlog... soon, I hope.

@kornicameister kornicameister changed the title [WIP] One Python to rule them all One Python to rule them all Aug 10, 2017
pip install python-monascaclient && \
rm -rf /root/.cache/pip && \
RUN apk add --no-cache py2-netaddr py2-yaml py2-jinja2 && \
apk add --no-cache --virtual build-dep && \
apk del build-dep
Member

Hmm, this seems a bit odd. If build-dep is empty we can probably skip adding/deleting it entirely.
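For reference, a minimal sketch of what that simplification could look like (package names are taken from the diff above; the commented-out variant only illustrates when a virtual group is actually worthwhile, and gcc/musl-dev are purely illustrative):

# no compiled dependencies, so the empty virtual group can simply go away
RUN apk add --no-cache py2-netaddr py2-yaml py2-jinja2 && \
    pip install python-monascaclient && \
    rm -rf /root/.cache/pip

# a virtual group only pays off when real build-time packages are needed:
# RUN apk add --no-cache --virtual build-dep gcc musl-dev && \
#     pip install python-monascaclient && \
#     apk del build-dep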

Contributor Author

Done

@kornicameister
Contributor Author

@timothyb89 @mhoppal friendly ping ;)

@timothyb89
Member

timothyb89 commented Aug 21, 2017

Hmm, so a few issues I ran into:

  • monasca-persister's master branch is failing: ConfigFileValueError: Value for option ip_address is not valid: influxdb is not IPv4 or IPv6 address - not sure it's actually related to this patch, though (maybe some upstream change)?
      • if it's an upstream change, we don't necessarily need to fix it in this PR
  • both monasca-agent-collector and monasca-agent-forwarder are failing because they try to launch /usr/bin/monasca-forwarder and /usr/bin/monasca-collector, but the scripts are actually installed in /usr/local/bin/

Other than these issues I think it's good to go!

@kornicameister
Contributor Author

I think the monasca-persister issue is related to the recently added oslo-config-generator. That change also switched some of the option types to, let's put it this way, more proper ones. It basically means that options representing an IP address or a port number, which were declared as a plain string and an integer respectively, now use the stricter oslo_config.cfg.IPOpt and oslo_config.cfg.PortOpt types.
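A minimal sketch of the difference, assuming the persister declares these options roughly as below (only the ip_address/port names come from the actual error; everything else is illustrative):

from oslo_config import cfg

# old-style declarations: any string / integer is accepted, so a
# hostname such as 'influxdb' passes straight through
old_opts = [cfg.StrOpt('ip_address'), cfg.IntOpt('port')]

# new-style declarations: the value is validated, so 'influxdb'
# fails with "is not IPv4 or IPv6 address", as in the error above
new_opts = [cfg.IPOpt('ip_address'), cfg.PortOpt('port')]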

About the monasca-agent: that is a bit surprising, as I didn't actually change anything about that part of the installation ;/ At least I thought so :D I will look into it.

@kornicameister
Contributor Author

kornicameister commented Aug 22, 2017

@timothyb89 I am not really sure about the agent's paths. I just ran ci.py locally, replicating the Travis run, and for me it goes like this:

$ docker run -it monasca/agent-collector:ci-cd sh -c "echo \"collector at: \" \$(command -v monasca-collector) ; echo \"forwarder at:\" \$(command -v monasca-forwarder)"
collector at:  /usr/bin/monasca-collector
forwarder at: /usr/bin/monasca-forwarder

Can you double-check that?

I will revert the commit in the meantime, because I think that is what messes up the CI.

@kornicameister
Contributor Author

@timothyb89 can you show me some example of monasca-persister failing? It seems OK: it starts properly in CI, otherwise no metrics would be saved and the tempest tests would fail.

@timothyb89
Member

Hmm interesting, I wonder if it's a python 2 vs 3 thing. I'll look into it a bit more.

@kornicameister
Contributor Author

kornicameister commented Aug 25, 2017

Is the reason it failed now that monasca-api cannot connect to the MySQL database? Was something changed recently?

edit: guess that was a random error after all. Ran Travis once again and the error is gone.

aliases:
- :latest
- tag: 1.1.1
Member

old tag should be removed
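Presumably that means the aliases block ends up as just (a sketch based on the snippet above):

aliases:
- :latest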

@timothyb89
Member

Hmm, I'm still seeing the agent failing locally. Either I have some outdated images that aren't getting rebuilt, or the :ci-cd tagging in Travis isn't picking things up in quite the right order (almost certainly the case, but it may or may not be the problem right now).

Is the change to the persister going to require any patches upstream?

@timothyb89
Member

Alright, I've purged my docker cache and rebuilt everything manually. I'm still seeing the persister and agent containers crash.

The agents fail with:

[tim:~/projects/monasca-docker]↥ master(+8/-9,33)+* ± docker-compose logs agent-collector
Attaching to monascadocker_agent-collector_1
agent-collector_1       | + PLUGIN_TEMPLATES=/templates
agent-collector_1       | + USER_PLUGINS=/plugins.d
agent-collector_1       | + AGENT_CONF=/etc/monasca/agent
agent-collector_1       | + AGENT_PLUGINS=/etc/monasca/agent/conf.d
agent-collector_1       | + mkdir -p /etc/monasca/agent/conf.d
agent-collector_1       | + [ false = true ]
agent-collector_1       | + [  = true ]
agent-collector_1       | + [ true = true ]
agent-collector_1       | + template /templates/cadvisor_host.yaml.j2 /etc/monasca/agent/conf.d/cadvisor_host.yaml
agent-collector_1       | + [ true = true ]
agent-collector_1       | + python /template.py /templates/cadvisor_host.yaml.j2 /etc/monasca/agent/conf.d/cadvisor_host.yaml
agent-collector_1       | + [  = true ]
agent-collector_1       | + [  = true ]
agent-collector_1       | + [  = true ]
agent-collector_1       | + [ -e /plugins.d/*.yaml.j2 ]
agent-collector_1       | + [ -e /plugins.d/*.yaml ]
agent-collector_1       | + template /etc/monasca/agent/agent.yaml.j2 /etc/monasca/agent/agent.yaml
agent-collector_1       | + [ true = true ]
agent-collector_1       | + python /template.py /etc/monasca/agent/agent.yaml.j2 /etc/monasca/agent/agent.yaml
agent-collector_1       | + /usr/bin/monasca-collector foreground
agent-collector_1       | /start.sh: line 1: /usr/bin/monasca-collector: not found

The persister fails with:

[tim:~/projects/monasca-docker]↥ master(+8/-9,33)+* ± docker-compose logs monasca-persister
Attaching to monascadocker_monasca-persister_1
monasca-persister_1     | + KAFKA_WAIT_RETRIES=24
monasca-persister_1     | + KAFKA_WAIT_DELAY=5
monasca-persister_1     | + [ -n alarm-state-transitions,metrics ]
monasca-persister_1     | + echo Waiting for Kafka topics to become available...
monasca-persister_1     | + success=false
monasca-persister_1     | Waiting for Kafka topics to become available...
monasca-persister_1     | + seq 24
monasca-persister_1     | + python /kafka_wait_for_topics.py
monasca-persister_1     | Checking for available topics: ['alarm-state-transitions', 'metrics']
monasca-persister_1     | Topic not found: alarm-state-transitions
monasca-persister_1     | No handlers could be found for logger "pykafka.cluster"
monasca-persister_1     | + [ 1 -eq 0 ]
monasca-persister_1     | + echo Kafka not yet ready (attempt 1 of 24)
monasca-persister_1     | + sleep 5
monasca-persister_1     | Kafka not yet ready (attempt 1 of 24)
monasca-persister_1     | + python /kafka_wait_for_topics.py
monasca-persister_1     | Checking for available topics: ['alarm-state-transitions', 'metrics']
monasca-persister_1     | Topic is ready: alarm-state-transitions
monasca-persister_1     | Topic is ready: metrics
monasca-persister_1     | + [ 0 -eq 0 ]
monasca-persister_1     | + success=true
monasca-persister_1     | + break
monasca-persister_1     | + [ true != true ]
monasca-persister_1     | + [ true = true ]
monasca-persister_1     | + python template.py /etc/monasca-persister/persister.conf.j2 /etc/monasca-persister/persister.conf
monasca-persister_1     | + monasca-persister --config-file /etc/monasca-persister/persister.conf
monasca-persister_1     | 2017-08-30 21:12:00.805 26 INFO monasca_persister.persister [-] 
monasca-persister_1     | 
monasca-persister_1     |                _____
monasca-persister_1     |               /     \   ____   ____ _____    ______ ____ _____
monasca-persister_1     |              /  \ /  \ /  _ \ /    \\__  \  /  ___// ___\\__  \
monasca-persister_1     |             /    Y    (  <_> )   |  \/ __ \_\___ \  \___ / __  \_
monasca-persister_1     |             \____|__  /\____/|___|  (____  /____  >\___  >____  /
monasca-persister_1     |                     \/            \/     \/     \/     \/     \/
monasca-persister_1     |             __________                    .__          __
monasca-persister_1     |             \______   \ ___________  _____|__| _______/  |_  ___________
monasca-persister_1     |              |     ___// __ \_  __ \/  ___/  |/  ___/\   __\/ __ \_  __ \
monasca-persister_1     |              |    |   \  ___/|  | \/\___ \|  |\___ \  |  | \  ___/|  | \/
monasca-persister_1     |              |____|    \___  >__|  /____  >__/____  > |__|  \___  >__|
monasca-persister_1     |                            \/           \/        \/            \/
monasca-persister_1     | 
monasca-persister_1     |         
monasca-persister_1     | 2017-08-30 21:12:00.812 35 INFO monasca_persister.persister [-] start process: <class 'monasca_persister.repositories.influxdb.metrics_repository.MetricInfluxdbRepository'>
monasca-persister_1     | 2017-08-30 21:12:00.813 36 INFO monasca_persister.persister [-] start process: <class 'monasca_persister.repositories.influxdb.alarm_state_history_repository.AlarmStateHistInfluxdbRepository'>
monasca-persister_1     | Process Process-2:
monasca-persister_1     | Traceback (most recent call last):
monasca-persister_1     |   File "/usr/local/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap
monasca-persister_1     |     self.run()
monasca-persister_1     |   File "/usr/local/lib/python2.7/multiprocessing/process.py", line 114, in run
monasca-persister_1     |     self._target(*self._args, **self._kwargs)
monasca-persister_1     |   File "/usr/local/lib/python2.7/site-packages/monasca_persister/persister.py", line 94, in start_process
monasca-persister_1     |     respository)
monasca-persister_1     |   File "/usr/local/lib/python2.7/site-packages/monasca_persister/repositories/persister.py", line 44, in __init__
monasca-persister_1     |     self.repository = repository()
monasca-persister_1     |   File "/usr/local/lib/python2.7/site-packages/monasca_persister/repositories/influxdb/alarm_state_history_repository.py", line 31, in __init__
monasca-persister_1     |     super(AlarmStateHistInfluxdbRepository, self).__init__()
monasca-persister_1     |   File "/usr/local/lib/python2.7/site-packages/monasca_persister/repositories/influxdb/abstract_repository.py", line 30, in __init__
monasca-persister_1     |     self.conf.influxdb.ip_address,
monasca-persister_1     |   File "/usr/local/lib/python2.7/site-packages/oslo_config/cfg.py", line 3363, in __getattr__
monasca-persister_1     |     return self._conf._get(name, self._group)
monasca-persister_1     |   File "/usr/local/lib/python2.7/site-packages/oslo_config/cfg.py", line 2925, in _get
monasca-persister_1     |     value = self._do_get(name, group, namespace)
monasca-persister_1     |   File "/usr/local/lib/python2.7/site-packages/oslo_config/cfg.py", line 2968, in _do_get
monasca-persister_1     |     % (opt.name, str(ve)))
monasca-persister_1     | ConfigFileValueError: Value for option ip_address is not valid: influxdb is not IPv4 or IPv6 address
monasca-persister_1     | 2017-08-30 21:12:00.950 26 INFO monasca_persister.persister [-] Received signal 17, beginning graceful shutdown.

I believe building via ci.py is always going to generate incorrect results. The CI script isn't smart enough to patch the correct image into the MON_AGENT_BASE_VERSION arg in monasca-agent-collector/build.yml and then rebuild the images in order (monasca/dbuild#2) so the agent containers will always be built on the older monasca-agent-base image. You'll have to build manually in order to reproduce the issue, unfortunately.
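For anyone trying to reproduce this locally, a manual rebuild along these lines should avoid the stale base (the image and directory names here are assumptions based on this repo's layout, not an exact recipe):

docker build -t monasca/agent-base:ci-cd monasca-agent-base/
docker build --build-arg MON_AGENT_BASE_VERSION=ci-cd \
    -t monasca/agent-collector:ci-cd monasca-agent-collector/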

@kornicameister
Contributor Author

Ok, getting back to work ;-). Holiday's over ;-(

@kornicameister
Contributor Author

Ahhh... I think I need to split this over into several PRs because of the recently introduced limit in Travis... I totally forgot about that. There are a lot more image changes than just 5.

@kornicameister
Contributor Author

Let's close this one down and split the PR up, one PR per image.

@timothyb89
Member

Yup, thanks for splitting it up!
