In a past post (http://sferich888.blogspot.com/2017/01/learning-using-jsonpath-with-openshift.html) I discussed what JSONPath could do. Today I learned that filters are only capable of simple comparison (==, !=, <=, >=) operations.
The tl;dr version is in this regex:
https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/client-go/util/jsonpath/parser.go#L338
Because Kubernetes (and, by proxy, OpenShift) uses the jsonpath extension from exponent-io and Go's re2 regex engine, the "filter" operation used to find and compare parts of a path is (as of right now) only capable of these simple operations.
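To see what these simple filters amount to, here is the equivalent of a JSONPath `==` filter written in plain Python (a sketch for illustration only; the pod data below is hypothetical, not output from a real cluster):

```python
import json

# A trimmed-down pod list, shaped like `oc get pods -o json` output (hypothetical data).
doc = json.loads("""
{"items": [
  {"metadata": {"name": "web-1"}, "status": {"phase": "Running"}},
  {"metadata": {"name": "db-1"},  "status": {"phase": "Pending"}}
]}
""")

# Equivalent of the JSONPath filter {.items[?(@.status.phase=="Running")].metadata.name}.
# Only simple comparisons (==, !=, <=, >=) are supported by the filter syntax,
# so anything fancier (regex matching, boolean logic) has to happen client-side
# like this instead.
names = [i["metadata"]["name"] for i in doc["items"]
         if i["status"]["phase"] == "Running"]
print(names)  # → ['web-1']
```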
Monday, January 30, 2017
Sunday, January 8, 2017
Fedora and NativeScript
I have decided to look into mobile application development. Due to my hardware limitations (I don't own a Mac), I am limited to Android application development.
However, as Java is not my favorite language and I want the option of porting my application to iOS, I have decided to write the application using NativeScript. This will let me learn JavaScript (better) as well as write an application that has the possibility of being portable to both Android and iOS.
To get started, I needed to download and install all of the required dependencies. The NativeScript website has a setup guide, but no instructions for Fedora.
Because I have a newly installed Fedora 25 system, I decided to see if Fedora's DevAssistant could help me install the required dependencies.
- Note: The Fedora Magazine has a good guide for installing Android Developer Studio.
- Installing Android Developer Studio is a LONG process, as you can download 50GB+ of material.
- You're going to need disk space for the SDK and the emulators.
sudo dnf install gcc-c++.x86_64
With this in place I am ready to start the development of my project!
sudo dnf install devassistant devassistant-ui
da pkg install android-studio
da crt android-studio --name Test
### Setup bashrc
#### Edit ~/.bashrc and add the following.
export JAVA_HOME=$(alternatives --list | grep java_sdk_openjdk | awk '{print $3}')
export ANDROID_HOME=${HOME}/android-studio/android-sdk-linux
export PATH=$PATH:${ANDROID_HOME}/tools/
export PATH=$PATH:${ANDROID_HOME}/platform-tools/
### End of file.
rm -rf Test ## The project created by DevAssistant is not needed.
sudo $ANDROID_HOME/tools/android update sdk --filter tools,platform-tools,android-23,build-tools-23.0.3,extra-android-m2repository,extra-google-m2repository,extra-android-support --all --no-ui
sudo npm install -g nativescript --unsafe-perm
tns doctor ### Make sure it does not report any errors
Sunday, October 2, 2016
Using Docker to reposync RHEL content to a system's local storage
I recently got the opportunity to go on site with a customer, which meant I would be on a plane for some period of time (5+ hours). I wanted to use this long stretch of downtime to do some work; however, the work I want to focus on requires resources. I plan to work on scripting and automating deployments of OpenShift with KVM, but due to my travel accommodations I will not be connected, and I don't want to rely on slow or unreliable internet speeds for package downloads.
Because Satellite (a repository manager) is bulky and my laptop's resources are limited, I decided that syncing the content and hosting it with a web server would be the fastest and most lightweight way to accomplish my goals.
However, as I run Fedora, getting the repositories (packages) was going to require some ingenuity: reposync only works if you can attach to the repositories, which requires a RHEL server (with a subscription), and that means installing a VM, which takes time.
Luckily we have containers! Instead of installing RHEL in a VM, mounting a filesystem into the VM, or turning that VM into the hosting server for the content, I can simply mount my storage (in my case an SD card) into a RHEL7 container and use the container and its tools to get the content.
$ sudo docker pull registry.access.redhat.com/rhel7
From here the steps to get the content may vary, but the basic process goes like:
$ sudo docker run -it --rm -v /var/run/<user>/<drive_name>:/opt rhel7 /bin/bash
# subscription-manager register --username <RHN_USERNAME> --auto-attach
[Password]:
# rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
- Note: Extra effort to enable channels might be needed, depending on how well auto-attach does at selecting your subscriptions.
# for repo in rhel-7-server-rpms rhel-7-server-extras-rpms rhel-7-server-ose-3.3-rpms; do reposync --gpgcheck -l --repoid=${repo} --download_path=/opt/ --downloadcomps --download-metadata; done
- Note: I based the repositories (and some of the process) on https://docs.openshift.com/container-platform/3.3/install_config/install/disconnected_install.html#disconnected-syncing-repos
# for repo in rhel-7-server-rpms rhel-7-server-extras-rpms rhel-7-server-ose-3.3-rpms; do createrepo -v /opt/${repo} -o /opt/${repo} -g /opt/${repo}/comps.xml; done
Because of this method (using Docker), I don't need to waste time installing a VM to do these operations, or use unnecessary resources to host this content. This leaves me more room for the VMs I plan to run as part of my scripting / testing, and it allowed me to get the content faster than I originally expected.
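As for hosting the synced content with a web server, Python's standard library is enough; here is a minimal sketch (the /opt path and port are assumptions carried over from the reposync commands above):

```python
import functools
import http.server
import socketserver

def serve(content_dir="/opt", port=8080):
    """Serve the synced repo tree over plain HTTP so a disconnected system
    can point a yum/dnf baseurl at http://<laptop-ip>:8080/<repo>/."""
    # SimpleHTTPRequestHandler serves files relative to `directory` (Python 3.7+).
    handler = functools.partial(http.server.SimpleHTTPRequestHandler,
                                directory=content_dir)
    with socketserver.TCPServer(("", port), handler) as httpd:
        print("Serving {0} on port {1}".format(content_dir, port))
        httpd.serve_forever()
```

Calling `serve()` from a shell session on the laptop is all the "web server" this setup needs.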
Tuesday, September 6, 2016
Improving the user experience of hub
I can't remember things well, so when possible I rely on tools (like bash completion) to help me complete or explore commands on Linux, so that I don't have to remember every command's 100+ options.
With hub, the same applies, and you may not have bash completion for this new tool if you installed it directly from the hub site.
To help you solve that, you can do:
if [[ -f /usr/share/bash-completion/completions/hub ]]; then echo "hub bash-completion already installed"; else sudo curl -o /usr/share/bash-completion/completions/hub https://raw.githubusercontent.com/github/hub/master/etc/hub.bash_completion.sh; fi
This relies on the bash-completion framework, but it's the simplest way to add completion for this new tool.
- This also updates your completion commands for git, so be aware that if you have not aliased git to hub, your completion may not correctly display your available options.
It should also be noted that if you're not installing hub from the project site directly (but instead using your OS's package manager), bash completion may already come with the package. Be sure to check for this (and reload your shell after installing).
Simplifying your github workflow with "hub"
As a support engineer / software maintenance engineer, I spend a lot of time looking at code, reading code, searching code, etc. So needless to say I have a few git repos sitting around.
In most cases, I use these repositories to help me understand errors or issues that customers see when using a product or library that I support. However, in some situations the problems, once identified, are simple to fix, and it becomes time for me to change hats and become a contributor.
In most cases contributing is a pain because it involves understanding parts of the SCM (git) tooling that can be difficult to grasp when you're first getting started. Depending on the project, the workflow for providing contributions can be "challenging". Luckily, services like GitHub have arisen to make the sharing / hosting of open source projects simple. They have also worked to make contributing to projects simple (though these efforts often go without praise).
One such example is GitHub's invention of the "pull request". While this is a source control concept for git, its workflow definition has fundamentally altered how contributions to projects work, because it defined tooling to unify and simplify the contribution process.
One complication with the pull request is that, without service tooling (GitHub), you are more or less providing "patches" that have to be manually managed. The biggest complication caused by this service tooling is that you have to use the web UI to create / submit a PR.
Not anymore. With hub you can remove this complication and move back to the terminal.
To get started you will need to install the hub tooling; in my case on Fedora I can just run:
sudo yum install hub
I can then use hub like 'git'.
hub clone https://github.com/openshift/openshift-docs
Except now, I have new options that integrate directly with the GitHub service.
cd openshift-docs/
hub fork
From here on out, most of the contribution process is likely the same as for any git project: Branch -> Modify -> Commit -> Push.
git remote -v
git checkout -b typos
< make changes >
git add install_config/install/prerequisites.adoc install_config/install/disconnected_install.adoc
<confirm changes>
git diff master
git status
git commit -m "fixing typos"
With your changes now in your fork and feature branch, you can switch back to hub to complete your PR and contribute to the project.
git push <GITHUB_USER> typos
git pull-request
- Note: you need to ensure that you have set up your GitHub SSH key, or set the following in your git configuration:
git config --global hub.protocol https
Friday, June 17, 2016
Implementing Dynamic DNS in OpenShift v3 (with Python and Free IPA)
For a few days now, I have been trying to think of a way to remove the wildcard DNS requirement from OpenShift v3.
In v2 we had the capability to dynamically update DNS servers when applications were created, so naturally I went looking for a way to do this in v3 as well.
It turns out it is possible to monitor the routes API in OpenShift v3 and, with the information provided by the events stream, update a dynamic DNS service like IPA.
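The event-handling half of that idea can be sketched in a few lines of Python. This is a hypothetical illustration: the event shape follows what a v3 routes watch stream emits, and the payload it builds matches the FreeIPA host_add call used in the sample further down the page; the hostname is made up.

```python
import json

def handle_route_event(event):
    """Turn one routes-watch event into a FreeIPA host_add JSON-RPC payload.

    `event` is one JSON object from the watch stream, e.g. from
    GET /oapi/v1/routes?watch=true on OpenShift v3.
    """
    if event["type"] != "ADDED":
        return None  # this sketch only registers newly created routes
    host = event["object"]["spec"]["host"]
    # JSON-RPC body for FreeIPA's host_add method.
    return {"id": 0, "method": "host_add",
            "params": [[host], {"force": True}]}

# Example event, shaped like a routes watch message (hypothetical data):
sample = {"type": "ADDED",
          "object": {"spec": {"host": "app.apps.example.com"}}}
payload = handle_route_event(sample)
print(json.dumps(payload))
```

A real watcher would loop over the chunked watch response and POST each payload to IPA, as the next sample shows.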
Using Python and "requests" to access the Free IPA API
For a few days now, I have needed a dynamic way to create DNS entries, or hosts, on my DNS/IDM provider.
So I set out on a quest to see if I could use Python and the FreeIPA API to add dynamically created hosts (in, say, a cloud environment or IaaS platform) and update DNS or host records.
In my search I found a good article by Alexander Bokovoy that gave me the information I needed to get started and complete my goal.
Below is a sample of what I needed.
#!/bin/python
import requests
import json

ipaurl = "https://idm.example.com/ipa/"

# 'event' is one object from the OpenShift routes watch stream; a sample
# is hard-coded here (hypothetical hostname) so the script runs standalone.
event = {'object': {'spec': {'host': 'app.example.com'}}}

session = requests.Session()
resp = session.post('{0}session/login_password'.format(ipaurl),
                    params="",
                    data={'user': 'certadmin', 'password': 'redhat'},
                    verify=False,
                    headers={'Content-Type': 'application/x-www-form-urlencoded',
                             'Accept': 'application/json'})

header = {'referer': ipaurl,
          'Content-Type': 'application/json',
          'Accept': 'application/json'}
create_host = session.post('{0}session/json'.format(ipaurl),
                           headers=header,
                           data=json.dumps({'id': 0, 'method': 'host_add',
                                            'params': [[event['object']['spec']['host']],
                                                       {'force': True,
                                                        'ip_address': '192.168.1.101'}]}),
                           verify=False)
print " Host Create Return Code: {0}".format(create_host.status_code)
This should create a host entry in your IPA server and set its IP address, allowing you to query the IPA server for DNS and resolve the proper IP of the hostname that was created.