Sunday, July 23, 2017

Accessing iCloud (and Dropbox) synced data from non-synced directories

Lately, I've found iCloud to be an amazing tool. Having switched between four computers in the last year (an abnormal year), I found iCloud synchronization provided a painless way to automagically have the same data across multiple computers using the same iCloud account. What didn't work for me was having to move all those files into ~/Desktop or ~/Documents in order for iCloud to sync them. For instance, I prefer to keep all my development files in my user's home directory (ex. /Users/myusername), since Terminal launches there by default (this too you can change, but I'd rather not).

Fortunately, there's a way around this using symbolic links. Unfortunately, you may still need to move the actual files into iCloud, but you can keep working from your home directory and have the changes propagated across all computers using that iCloud account. You will need to execute the symlink command on each computer after it has finished syncing with iCloud. I say "may" need to move the actual files because I didn't have much luck leaving them in /Users/myusername and putting the symlinked file in iCloud; iCloud didn't seem to want to sync the symlink. There may still be a way around that, like using Dropbox or altering this technique.


 How to: 

  1. Move the file(s) to an iCloud-synced directory (ex. /Users/username/Desktop/desired_directory_or_file) 
  2. Open Terminal.app or your shell of choice. 
  3. Navigate to the directory where you'd like to place the symlinked folder. (ex. /Users/myusername/) 
  4. Run command: `ln -s /path/to/folder symlink_name` (ex. `ln -s /Users/myusername/Desktop/Work/dev dev`) 
  5. Repeat the symlink command on each computer you use (depending on the size of the directory contents, you may need to wait for iCloud to finish syncing)


Notes on `ln` command and args: 

  • `ln` is a command used to make links between files and folders on your file system as well as external file systems 
  • `-s` flag is required to make a symbolic link 
  • The first argument is the path to the original file or folder, where the data is actually stored. 
  • The second argument is the name of the symlink you wish to create. Run the command from the directory where you want the symlink to live; in our case, the user's home directory (ex. /Users/myusername/).

This also works with Dropbox (and presumably other cloud storage applications), so you're not limited to iCloud synchronization.
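Putting it all together, here's a minimal sketch of the whole flow using the example paths from the steps above (adjust them to your own setup):

$ mv ~/dev ~/Desktop/Work/dev    # move the real folder into an iCloud-synced location
$ cd ~                           # go to where the symlink should live
$ ln -s ~/Desktop/Work/dev dev   # create the symlink
$ ls -l dev                      # verify: dev -> /Users/myusername/Desktop/Work/dev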

Thursday, December 31, 2015

Elixir (Phoenix) App Custom Domain Redirect on Heroku Not Working Properly

TL;DR If your custom domain is [partly] working but still redirects you to the herokuapp.com URL, you may need to remove force_ssl: [...] and change scheme: "https" to "http" in your config/prod.exs. That said, if you're serving any sensitive data (e.g. user logins, credit cards), DO NOT FOLLOW THIS :) Use SSL.

The other day I decided to make a personal site with a brief "Who am I" and resume.  Not that I'm looking for a job, but more because I wanted something to do in Elixir and couldn't think of anything else I could knock out in a couple hours.  I made the static site using the Phoenix framework and pushed it to Heroku following Phoenix's awesome documentation.  Easy peasy!

The only holdup I had was getting my custom domain to work properly.  I set up the CNAME and @ records as described in a well-written (albeit outdated) StackOverflow post, and it did the trick for the most part.  Going to markevans.io redirected to my Heroku app as expected, but instead of the URL staying markevans.io, it redirected to the herokuapp.com URL.  Ugh. I began to scour the internets and followed every post I could find about setting up DNS records on NameCheap and Heroku.  Spoiler alert: they all say pretty much the same thing, and 99% of them are for Ruby/Rails apps.  The good thing is, as a Ruby developer, I can understand what's being done and how to translate it to my bare-bones Elixir/Phoenix app.

So what was the problem/solution?  The problem was SSL.  Phoenix's Heroku deployment documentation shows you how to deploy a secure app, which is great.  But my site isn't handling any sensitive data; there's no login, nothing to purchase... nada.  So really, I don't need SSL at this point and I don't want to spend any money on SSL certs (if your site handles *any* secure data, don't do what I did.  Set it up to work with SSL).

The solution, similar to a Rails app, is to not force SSL in your app.  As per the Phoenix-Heroku deployment documentation, your prod.exs should look like this:

config/prod.exs

config :myapp, Myapp.Endpoint,
  http: [port: {:system, "PORT"}],
  url: [scheme: "https", host: "bla-bla-bla.herokuapp.com", port: 443],
  force_ssl: [rewrite_on: [:x_forwarded_proto]],
  cache_static_manifest: "priv/static/manifest.json",
  secret_key_base: System.get_env("SECRET_KEY_BASE")


This setup works great if you're serving over SSL, but if you're not and don't plan to, it'll mess up your domain forwarding. Your custom domain will work, but you'll be redirected to the secure URL (Heroku's URL). To fix this, remove force_ssl: [rewrite_on: [:x_forwarded_proto]], replace url: [scheme: "https",...] with url: [scheme: "http",...], and change port: 443 to port: 80 so your generated URLs don't carry a stray :443. When you're done, your prod.exs should look like this:

config/prod.exs

config :myapp, Myapp.Endpoint,
  http: [port: {:system, "PORT"}],
  url: [scheme: "http", host: "bla-bla-bla.herokuapp.com", port: 80],
  cache_static_manifest: "priv/static/manifest.json",
  secret_key_base: System.get_env("SECRET_KEY_BASE")


Now git add . && git commit -m "removed force_ssl from production config", then deploy to Heroku. You should be good to go!
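If you're deploying via the standard Heroku git remote, that last step is just:

$ git push heroku master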

Wednesday, February 4, 2015

What is PATH in Linux and MacOS?

You know all those commands you can automagically summon from any directory on the command line?  Like:
  
$ git <args>

$ apt-get <args>

$ irb

You're probably well aware that each of those commands is a program that could also be executed with:
  
$ /path/to/git <args>

$ /path/to/apt-get <args>

$ /path/to/irb

Without PATH you'd have to execute any program not in your current directory by explicitly declaring the program's path... hence the name. Think of it as a postal system for your shell.  You give your shell a message to deliver to a program.  Your shell flips through its Rolodex of paths known to contain programs to find the recipient and execute it with your message.

Run the following to see where your shell is looking for programs:
  
$ echo $PATH

You should see something similar to:
  
/Users/username/.rbenv/shims:/Users/username/.rbenv/bin:/usr/local/bin:
/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/git/bin:
/Users/username/.gem/ruby/1.8/bin:/opt/nginx/sbin:/usr/local/sbin

Each path (separated by ':') is where your shell looks for a program to fulfill your every wish. And on the rare occasion when your trusty old pal shell can't come through, at least she/he has a sense of humor:
  
$ make love

>> make: *** No rule to make target `love'.  Stop.

This post is really just a regurgitation of Exercism.io's Understanding PATH so that I retain this awesome knowledge. If you want to know how to edit your PATH, visit their article and scroll halfway down.
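For quick reference, a minimal sketch of adding a directory to your PATH (this line goes in ~/.bash_profile or your shell's equivalent; the directory is just an example):

$ export PATH="$HOME/mytools/bin:$PATH"

Prepending means your shell checks that directory first, which is how tools like rbenv put their shims ahead of the system Ruby.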

Tuesday, July 22, 2014

Ahhhhh! It's ALIVE!



I finally deployed my listing manager app to Heroku.  For the longest time it's been sitting on localhost, simply because it needed API calls to provide the content that fills the views, and the API keys needed were my own.  StubHub's sandbox hasn't been working for the longest time and they haven't been responsive to the issue.  To get around this, I set up a demo account with limited permissions: basically, you can GET from the API, but you cannot PUT/POST/DELETE.  The downside is that it's still limited to 10 API calls per minute, which is why I'm not going to post the demo account credentials here.  But if you'd like to take a look, get at me and I'll send you the login credentials.

New Features Added

User Permissions

As I alluded to in the intro, accounts now have permissions.  This was originally done so that I could have a demo account, but I quickly realized having a permissions table belonging to users would be helpful for managing master > sub accounts, since many ticket resellers have employees who need access to certain things (such as updating prices or creating/deleting listings) but not others (such as profit and loss, or account preferences).  I didn't do any research on the correct way to implement permissions, so my design might not be up to snuff with convention.

In short, Account has_one Permission and Permission belongs_to Account.  In the controllers, some actions have a before filter that checks current_user.permission for the correct permissions.  If the user is not permitted, he/she is redirected to the current page with a flash[:notice] letting them know they do not have permission to access that action.
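A minimal sketch of the pattern (the controller, filter, and attribute names here are illustrative, not the app's actual code):

class ListingsController < ApplicationController
  before_filter :require_delete_permission, only: [:destroy]

  def destroy
    # ... delete the listing ...
  end

  private

  # Hypothetical check: bounce the user back with a notice
  # unless their Permission record allows deleting listings.
  def require_delete_permission
    unless current_user.permission.can_delete_listings
      redirect_to :back, notice: "You do not have permission to do that."
    end
  end
end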

User Settings

Some of the features in the app are sensitive to a user's unique preferences.  For instance, events change color depending on how far out the event is.  Some events, such as concerts, might peak in sales immediately after an on-sale as well as a few weeks prior to the event, so having an event turn red three days out wouldn't be of much use.  Conversely, soccer tends to be pretty dead until one to three days from the event date.  With User Settings, each user (whether master or sub account) can set their own preferences for when events turn red/yellow/green/blue.  In addition to event date coloring, users can select how many days Recent Sales covers.  This is useful because a large brokerage may only want to see the last 24 hours, while a small brokerage finds it more useful to see the last seven to 14 days' worth of sales.  And finally, users can set the chart types that show on previous event reports.



Report Charts


While this area is very much in its infancy, I thought it would be useful to have charts detailing different sales metrics.  The first (and currently only) metric implemented is sales trajectory.  When viewing a past event, users can see how many sales a specific event had for each month from the first sale to the event date.  The charts were made using the Chart.js framework and were really simple to integrate with Ruby.  The only 'gotcha' I came across was while injecting Ruby into the JavaScript.  Using ERB tags and passing in only the variable doesn't work; the quotes around each string come through HTML-escaped.  Here was the issue and the solution:

Problem
<%= @chart_labels %>  => [&quot;January&quot;, &quot;February&quot; ...]

Solution
<%= raw @chart_labels %> => ["January", "February", ...]
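For context, here's a sketch of where those labels land in the view (Chart.js 1.x style; the canvas id and @chart_values variable are illustrative):

var ctx = document.getElementById("sales-chart").getContext("2d");
var data = {
  labels: <%= raw @chart_labels %>,              // ["January", "February", ...]
  datasets: [{ data: <%= raw @chart_values %> }] // monthly sale counts
};
new Chart(ctx).Line(data);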


More charts are coming in the near future... ideas? :)

There were a host of other small things added, like CSS styling, locking out User CRUD, and finally the process of deploying to Heroku, but all that's less interesting, so I'll omit that for now.

If you'd like to poke around, let me know and I'll give you the creds.





Monday, July 21, 2014

Form_For: Gotcha!

Hey!  Why is it that my form won't post unless I refresh the page?  The submit button simply doesn't respond to clicks. :(  Surprisingly, the answer was simple, and it turns out it's not a Ruby on Rails issue; it's actually an HTML issue that surfaces when using form_for.

If you're running into this issue, my guess is that your form is nested inside <table> tags.  That was my issue.  It seems that if you're going to put your form in a table, you *must* wrap the <table> tags with the <%= form_for @user do |f| %> ... <% end %> ERB tags.

Here's a before and after of my issue (broken code and fixed code):

BROKEN CODE - Notice the form_for erb tags are nested inside the HTML table tags.
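A minimal sketch of the broken pattern (the fields are illustrative):

<table>
  <%= form_for @user do |f| %>
    <tr>
      <td><%= f.text_field :name %></td>
      <td><%= f.submit "Save" %></td>
    </tr>
  <% end %>
</table>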


FIXED CODE - Just moved the form_for erb tags outside the HTML table tags.
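The same sketch with the form_for tags moved outside the table:

<%= form_for @user do |f| %>
  <table>
    <tr>
      <td><%= f.text_field :name %></td>
      <td><%= f.submit "Save" %></td>
    </tr>
  </table>
<% end %>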


If you know why this works, I'd love to hear your explanation.  My current best guess: while parsing, browsers relocate elements that aren't valid direct children of <table> (so-called foster parenting), which can leave the form element empty and the submit button orphaned outside it.

Sunday, May 18, 2014

StubHub Listing Manager +





"Software is Never Done..."


"...You just kinda choose when to stop working on it."  When I heard one of my gSchool instructors, Jeff Dean, say that, it really hit home.  It allowed me to give myself permission to stop working on my last personal project.  When I started the StubHub Listing Manager +, I had one clear goal:  get all my inventory and sale data form StubHub and put it in a format that easy for me to navigate and digest.  But once I'd done that, I had a million other ideas of what I could do, and truthfully, it became a little overwhelming.  I felt as if I'd never finish this project.

The Problem

StubHub is great for the average consumer who has a couple ticket groups they want to sell or buy.  But if you have a large inventory of tickets, navigating your inventory and sales is a cumbersome process, because their take on inventory management is: "Let's list everything the user has for sale on a single page, in chronological order."  There isn't any sorting by date or by event.  There's no clear way to find out how many tickets you have for a given event, how close each event is, or where exactly your prices stand in comparison to other tickets on the market.  You can get that information, but it involves clicking between multiple screens, writing down information, and manually comparing it to your own.  Simple if you have a couple tickets for sale.  Not simple if you have hundreds or even thousands of tickets for sale.

The Interim Solution

Most large sellers will get a third-party point-of-sale system that holds their inventory, lets them view it in a sortable manner, and provides detailed information about the current market, their individual ticket groups, and previous sales.  The catch is, these third-party point-of-sale systems cost between $200 and $300 per month, charge an additional fee per sale, require you to have a credit card processor and business phone lines, and leave you to deal with chargebacks, shipping, and your own customer service.  I did that for a few years, but ultimately found that the overhead wasn't justified.  Well over $1000 per month went into managing my own inventory.  And the headaches of customer service, credit card processing, and shipping just weren't worth it.  Luckily, if you sell only on StubHub, they handle all that for you (sans inventory management) and charge you 15% per transaction, which is a little higher than handling it on your own, but well worth not having to deal with all the aforementioned headaches.

The New Solution

So after a few years of doing it all on our own, we decided to shut it all down and deal only with StubHub.  The problem, as mentioned above, is that inventory management is a huge hassle because it has to be done manually.  So with my new programming super power, I decided to create my own makeshift inventory manager that GETs, POSTs, PUTs, and DELETEs from StubHub's API, essentially putting the burden on them.  In addition, to verify that a sale has been paid (which a point-of-sale would do for you), I integrated my inventory manager with the PayPal API to verify that the sale StubHub shows has actually been paid out.
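To give a flavor of the kind of call involved, a minimal Ruby sketch (the endpoint and token are hypothetical; StubHub's real API paths and auth differ):

require 'net/http'
require 'json'

# Hypothetical endpoint and token, for illustration only.
uri = URI("https://api.stubhub.example/inventory/listings")
req = Net::HTTP::Get.new(uri)
req["Authorization"] = "Bearer #{ENV['STUBHUB_TOKEN']}"

res = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(req) }
listings = JSON.parse(res.body) # each listing can then be synced into the local database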

After some quick Bootstrap styling, it's as done as I want it to be... even though there's loads of other features I still think about adding.  Above is a brief video of the finished product.

PS

I learned that not all APIs are made the same.  StubHub's API is poorly maintained, lacking a lot of information, and their developer support is virtually non-existent.  Kind of sad for such a large company, but it taught me that sometimes you have to fiddle around... a lot... to get things to work.  Because that's the only way it's going to happen.  PayPal wasn't much different.  While they have new RESTful APIs that are supposed to be easy to work with, all the information I needed had to come from their Classic (ahem... outdated) APIs.  I ended up using their SDK gem instead of doing it all manually because it was such a huge pain in the rear.

24 Hours of Le Google (GovDev Hackathon)

1:30 AM - After two-thirds had gone to bed or given up



I would imagine it's similar for most budding developers: their first endurance hackathon is an eye-opening look into the world of professional software development.  It's more à la Formula 1 than the scene of nerds (me included) lapping between their computers, pizza slices, and empty Red Bull cans that mainstream media, or even my own pre-hackathon imagination, would depict.  So what might you expect?

Off to the Races

Enter Galvanize Denver, the venue of 24 Hours of Le Google (GovDev Hackathon): registration booths manned with pre-printed name tags, free Google swag (t-shirts and water bottles), staff in matching orange shirts, Google-colored race banners streaming from the ceiling, catered breakfast, and in the main room... the horse and pony show.

Teams arrived with matching computer terminals: small tables packed with 27-inch Apple iMacs paired with 27-inch Apple external displays, the most impressive team having four such stations packed into a single small table.  Many teams were split into 'pit' duties: the UX guy, the backend guy, the frontend guy, the mobile guy, the project manager ('guy' in the general, non-pejorative or exclusionary sense... some of these 'guys' were gals).  The air of competition was apparent.  No one was too friendly or too talkative; they were all there to win.

Leading up to 'go time', there was a parade of who's who in state tech and Google big-wigs.  Talks from the CEO of this, the CIO of that, and the COO of the other, all sharing their ideas of why this was an important race and why what we were doing was furthering private citizen/sector involvement in making government more efficient.  They sold me; I was excited to be part of it (and still am).  After the talks came the challenges: take this data and make it public and understandable to the layman; take these horrible relics of bureaucracy (repetitive paper forms), digitize and streamline them so they're easy to complete on a mobile device, and pass the completed product to all affected departments to streamline coordination; and finally, take this paper-based donation and dissemination program and make it mobile and seamless.  The last two challenges had to do with disaster response and preparedness; the first dealt with government expenditures and transparency.


We Had No Idea

What started as a team of three (@FindingFixes, @ScottSkender, and I) had no idea of the complexity of the tasks we'd been presented with.  We chose the challenge of making government expenditures transparent.  It seemed straightforward: take these CSV files with a half-million line items and make them understandable to Jane and Joe Doe.  Create a filterable dashboard with charts, include map integration, and toss in a few Google APIs to meet the minimum requirements.  Much to my surprise, our little laptops didn't like CSV files with a half-million lines.  They were too big!  And taking legacy data formats and converting them into usable content (like converting to JSON or importing into a PostgreSQL database) was more difficult than anticipated.  After 3.5 hours of hacking away (and getting nowhere), one of our members decided it was time to call it a day.


And Then There Were Two

I'd had it with this 'big data' challenge; it was more than I had bargained for and a touch outside our team's technical savvy.  So my partner (Seth Musulin) and I started on another challenge that seemed to complement our technical chops: the inventory management challenge, tracking donations and the dissemination of donations.  It was to be a CRUD app of donation items, victims, and the donation items they'd been given (that's the superficial view of it).  After 3.5 hours of getting nowhere on the first challenge, having created a database table and an app that CRUDs donation items within the first 30 minutes felt like a massive accomplishment... until we started understanding what else was needed. hahaha.

Now that our app could take in donations, we needed to CRUD people, then check out those donations to people, then allow staff to search for donations by location, item type, donor, etc.  Easy enough, right?  Unfortunately, that wasn't the hard part.  Doing all that took our team of two until 1AM (having started at noon), and we still had to make it work seamlessly from a user-experience point of view... and... it had to look good.  But it was 1AM, and having seen other teams nearly finished with some super-sexy, ultra-functional, and utterly seamless apps... we knew there was no way we could get it done in time.  So my partner and I packed our bags and called it a day.  Considering two-thirds of the original competitors had already left, I feel like we hung with the best of them... although our app surely didn't.

Lesson Learned

These guys take endurance hackathons seriously!  They plan teams where each person is a well-oiled, integral cog in a finely tuned software-producing machine.  They don't play around.  They know their strengths, and they avoid spinning their wheels on tasks that don't fit their skill-set.  For future endeavors, I'll be part of a larger team with varied disciplines, and I'll stick to what I know.  While it was a great opportunity to learn some new technologies, like Apache Solr (for fulltext and location-based search) and JavaScript location services, it was certainly not the venue to 'learn' if your intent was to finish and possibly even win.  That said, I'll do it again, and again... and again!  It was a great time, but certainly not what I'd expected.