Ever had the problem of not having a one-stop shop to track all your logging? We did, and we decided to go with Sentry. Since we had to piece together several guides to set it up completely, here's a comprehensive guide to get Sentry running smoothly.
Sentry bills itself as “the modern error logging and aggregation platform,” and boy does it deliver.
SERVER SETUP

1. Install the Sentry server package:

```shell
pip install -U sentry
```
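The pip install only gives you the `sentry` command; the server itself still has to be initialized and started. Roughly what that looks like, as a sketch for the Sentry 8.x series we ran (the `/etc/sentry` config path is our choice, not a requirement):

```shell
# Generate a config directory (config.yml + sentry.conf.py)
sentry init /etc/sentry

# Point the generated files at your database, then build the
# schema and create the first superuser
SENTRY_CONF=/etc/sentry sentry upgrade

# Serve the web UI (port 9000 by default) plus a background worker
SENTRY_CONF=/etc/sentry sentry run web
SENTRY_CONF=/etc/sentry sentry run worker
```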
LOCAL SETUP
1. Install Raven:

```shell
pip install raven --upgrade
```

2. Add the following to INSTALLED_APPS:

```python
INSTALLED_APPS = (
    'raven.contrib.django.raven_compat',
)
```

3. Grab the DSN from the Setup Instructions page on the Sentry server: http://ping.ainfo.io:9000/AI/nyu-iconnect/settings/install/python-django/

4. Apply the DSN to RAVEN_CONFIG in settings:
```python
import raven

RAVEN_CONFIG = {
    'dsn': 'http://5926db97f760460ba5caf54307c2e525:a50875fdd0cf42c284c8b1044085cb9f@ping.ainfo.io:9000/2',
}
```

There are two ways to test whether the Sentry connection is successful.

1. Run the built-in test command:

```shell
python manage.py raven test
```

2. From `python manage.py shell`:

```python
from raven.contrib.django.raven_compat.models import client

try:
    a = 1 / 0
except ZeroDivisionError:
    client.captureException()

client.captureMessage('zero division error TEST')
```

The shell snippet above should print something similar to this in the terminal:

```
DEBUG 2016-04-06 15:34:49,515 base 97020 140735130346240 Configuring Raven for host:
DEBUG 2016-04-06 15:34:50,496 base 97020 140735130346240 Sending message of length 972 to http://ping.ainfo.io:9000/api/2/store/
DEBUG 2016-04-06 15:34:50,837 base 97020 140735130346240 Sending message of length 5906 to http://ping.ainfo.io:9000/api/2/store/
```

Implement logging in project settings:

```python
import raven
import logging

SENTRY_AUTO_LOG_STACKS = True

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'verbose': {
            'format': '%(levelname)s %(asctime)s %(module)s '
                      '%(process)d %(thread)d %(message)s'
        },
    },
    'handlers': {
        'mail_admins': {
            'level': 'ERROR',
            'class': 'django.utils.log.AdminEmailHandler'
        },
        'sentry': {
            'level': 'ERROR',
            'class': 'raven.contrib.django.handlers.SentryHandler',
        },
        'console': {
            'level': 'DEBUG',
            'class': 'logging.StreamHandler',
            'formatter': 'verbose',
            # 'filename': os.path.join(PROJECT_BASE_PATH, 'logs/django.log'),
        }
    },
    'loggers': {
        'django.request': {
            'handlers': ['mail_admins'],
            'level': 'ERROR',
            'propagate': True,
        },
        'django.db.backends': {
            'level': 'ERROR',
            'handlers': ['console'],
            'propagate': False,
        },
        'raven': {
            'level': 'DEBUG',
            'handlers': ['console', 'sentry'],
            'propagate': False,
        },
        'sentry.errors': {
            'level': 'DEBUG',
            'handlers': ['console'],
            'propagate': False,
        },
        'celery': {
            'level': 'WARNING',
            'handlers': ['sentry'],
            'propagate': False,
        },
    }
}
```

Capturing exceptions, for example:

```python
import logging
from raven.contrib.django.raven_compat.models import client

logger = logging.getLogger('raven')

try:
    pass  # code that may raise
except Exception:
    client.captureException()
    # or send a plain message:
    client.captureMessage('something went wrong')
    # or go through the logger:
    logger.error('something went wrong', exc_info=True)
```
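The routing in the LOGGING dict is plain stdlib `logging.config.dictConfig` behaviour, so you can sanity-check it without a Sentry server at all. A minimal sketch; the `ListHandler` class below is ours, standing in for raven's `SentryHandler`:

```python
import logging
import logging.config

class ListHandler(logging.Handler):
    """Stand-in for raven's SentryHandler: just collects records."""
    records = []

    def emit(self, record):
        ListHandler.records.append(record)

logging.config.dictConfig({
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        # the '()' key lets dictConfig use our stand-in class directly
        'sentry': {'()': ListHandler, 'level': 'ERROR'},
    },
    'loggers': {
        'raven': {'level': 'DEBUG', 'handlers': ['sentry'], 'propagate': False},
    },
})

logger = logging.getLogger('raven')
logger.debug('below the handler threshold -- dropped')
logger.error('at ERROR -- captured')

print(len(ListHandler.records))  # -> 1
```

The logger passes everything from DEBUG up, but the handler's own ERROR threshold is what keeps noise out of Sentry, exactly as in the real config above.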
Miscellaneous
Catch 404s
Add the following to MIDDLEWARE_CLASSES:

```python
'raven.contrib.django.raven_compat.middleware.Sentry404CatchMiddleware',
```
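In context, that line just joins your existing tuple in settings.py; the surrounding entries below are only stock Django defaults, shown as a sketch of placement:

```python
# settings.py -- sketch; your actual middleware list will differ
MIDDLEWARE_CLASSES = (
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.middleware.common.CommonMiddleware',
    # ... the rest of your middleware ...
    'raven.contrib.django.raven_compat.middleware.Sentry404CatchMiddleware',
)
```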
Additional Logging Configuration

```python
import logging

logging.basicConfig(
    level=logging.DEBUG,
    format="%(levelname)s %(name)s: %(message)s",
)

logger.error('Irb not set', exc_info=True, extra={
    # Optionally pass a request and we'll grab any information we can
    'request': request,
})
```
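That `extra` mapping simply becomes attributes on the stdlib `LogRecord`, which is how raven finds the request later. You can verify the mechanism with no Sentry involved; the handler, logger name, and fake request below are all ours:

```python
import logging

captured = []

class CaptureHandler(logging.Handler):
    """Collects records so we can inspect what `extra` did."""
    def emit(self, record):
        captured.append(record)

logger = logging.getLogger('extra-demo')
logger.addHandler(CaptureHandler())

# Stand-in for a real Django request object
fake_request = {'path': '/health'}
logger.error('Irb not set', extra={'request': fake_request})

record = captured[0]
print(record.request)  # -> {'path': '/health'}
```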
More Documentation https://docs.getsentry.com/hosted/.
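One thing from those docs worth calling out: you can tag every event with a release so Sentry groups errors by deploy. `raven.fetch_git_sha` is part of raven itself; the sketch assumes your project is served from a git checkout, and the DSN is a placeholder for yours:

```python
# settings.py -- optional release tagging (assumes a git checkout)
import os
import raven

RAVEN_CONFIG = {
    'dsn': 'http://<public_key>:<secret_key>@ping.ainfo.io:9000/2',
    'release': raven.fetch_git_sha(os.path.dirname(os.pardir)),
}
```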
Thanks for the wonderful article.
When I try to run an Ansible module from the Python API (AWS EC2 instances are my target machines), private_key_file is read from /etc/ansible/ansible.cfg, but I still have to pass remote_user in Options for the module to run. (I have not set the ANSIBLE_CONFIG environment variable.)
Here is my Python:
…
```python
Options = namedtuple('Options', ['connection', 'remote_user', 'module_path', 'forks',
                                 'become', 'become_method', 'become_user', 'check',
                                 'verbosity'])
options = Options(connection='ssh', remote_user='centos', module_path='./library',
                  forks=100, become=None, become_method=None, become_user=None,
                  check=False, verbosity=None)
```
…
```python
# create inventory and pass to var manager
inventory = Inventory(loader=loader, variable_manager=variable_manager,
                      host_list='inventory/ec2.py')
variable_manager.set_inventory(inventory)

# create play with tasks
play_source = dict(
    name="Ansible Play",
    hosts='all',
    gather_facts='no',
    tasks=[
        dict(action=dict(module='running_services',
                         args=dict(deploy_dir="/home/centos/test/")),
             register='running_services')
    ]
)
play = Play().load(play_source, variable_manager=variable_manager, loader=loader)

# actually run it
tqm = None
try:
    tqm = TaskQueueManager(
        inventory=inventory,
        variable_manager=variable_manager,
        loader=loader,
        options=options,
        passwords=passwords,
        stdout_callback=results_callback,  # custom callback
    )
    result = tqm.run(play)
finally:
    if tqm is not None:
        tqm.cleanup()
```
I fail to understand why it takes the key from the global ansible.cfg but not remote_user. I do not want to pass remote_user in code; I want it to come from /etc/ansible/ansible.cfg.
/etc/ansible/ansible.cfg:
```ini
…
[defaults]
host_key_checking = False
remote_user = centos
private_key_file = ~/.ssh/key.pem
```
P.S. Even if I export ANSIBLE_CONFIG=/etc/ansible/ansible.cfg, it doesn’t make any difference.
Any help would be appreciated.