What I'm doing here is passing some JSON through the 'rest.framework'. The problem is that when using 'request.json' I'm seeing the following being returned.
What I need to do is extract the JSON. To do this, the following works.
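As a minimal sketch of the extraction step, assuming the body shows up as a raw JSON string (raw_body and its contents here are stand-ins, not the framework's actual objects), the stdlib json module does the job:

```python
import json

# Sketch of the extraction step: raw_body stands in for the request
# data, which arrives as a JSON string rather than a parsed dict.
raw_body = '{"name": "test", "values": [1, 2, 3]}'

payload = json.loads(raw_body)
print(payload["name"])  # -> test
```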
Just a quick note: if you've searched and found this blog then I expect you already know deep down that the M1 chip is your issue. There is a solution though. Let me explain.
So, what we've been doing is using Lambda Layers to install Python packages for our Lambda functions. However, when building the same zip on my Mac (M1 chip) it doesn't work!
https://docs.aws.amazon.com/lambda/latest/dg/configuration-layers.html
It does work if we take the zip file built on an Intel (AMD64) Mac and use that on my Mac instead.
That is the solution we're having to use at the moment: the zip needs to be built on an AMD64 machine; building it on the M1 chip won't work.
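A quick way to confirm which architecture you're building on (assuming your Lambda runs on the default x86_64 runtime) is Python's platform module:

```python
import platform

# On Apple Silicon this reports "arm64"; on an Intel machine "x86_64".
# Wheels with compiled extensions built under arm64 won't load in an
# x86_64 Lambda runtime, which is the mismatch described above.
arch = platform.machine()
print("building on:", arch)

if arch == "arm64":
    print("build the layer zip on an x86_64 machine instead")
```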
Here's the error code I was getting.
The recommended solution across the web seemed to be to run the following.
pipenv install --upgrade setuptools
However, this didn't work for me. I needed to use the '--pre' flag.
pipenv install --pre email
This now works; the version I had the issue with was 'email-4.0.2'.
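For context on why '--pre' is needed: pipenv, like pip, skips pre-release versions during resolution unless told otherwise. A quick check with the packaging library (which pip itself uses) shows which version strings count as pre-releases; the version numbers below are illustrative, not the actual 'email' releases:

```python
from packaging.version import Version

# Pre-release markers (a, b, rc, dev) are excluded by default
# dependency resolution; '--pre' opts back in.
assert Version("1.0rc1").is_prerelease
assert Version("1.0.dev0").is_prerelease
assert not Version("1.0").is_prerelease
```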
So, if you have an 'email-4.0.2 won't pipenv install' issue then I hope this post helps.
After running through this blog on how to create a WordPress site on Docker (which works excellently): https://davidyeiser.com/tutorials/docker-wordpress-theme-setup
After doing so, I couldn't quite work out what details I needed to use in Sequel Pro / Sequel Ace.
So my entire file looks like this.
And then I can use
'npm run build' 'dist' folder not working.
I'd been hitting a bit of a brick wall with this task, so I thought it'd be a good idea to commit it to the blog, in case someone else finds it and can save themselves a heap of time. The task I was trying to achieve was:
'How to host a Vue.js project on Firebase'.
And I was following this blog and YouTube video for guidance.
How to deploy vue.js applications with firebase hosting
And Net Ninja's
But on a site (that was already built and working locally) I could only see the Firebase hosting holding message.
'Firebase Hosting Setup Complete'
I could, however, create a new Vue.js website (boilerplate) and work from that, but building it up from scratch again would be excruciating.
Let's take a look at the process I was following and where it was getting stuck.
$ firebase init
Scroll down to the first 'Hosting config' choice, press the spacebar to select it, and then press return.
There is already a public folder in use, so we'll want to set the public directory to 'dist'.
As we're using Vue.js, which is a JavaScript app that rewrites all the pages through the index page, we need to answer 'y' when asked whether to rewrite all URLs to /index.html.
This creates a dummy index page.
We don't want to deploy this; we now want to build our Vue application so that all the files go inside this 'dist' folder, and then we want to deploy it.
npm run build
And we should now be able to view our site with.
firebase serve
And this gives us the same 'Firebase Hosting' message we were seeing before.
On my basic Vue.js install, though, I could see that it was working fine. One thing that caught my eye was that the 'dist' folder was still empty. Hold on! No, the issue was that another 'dist' folder had been created: I had one in the root of the project and one in the 'app' folder.
I removed both the ‘dist’ folders and started again.
Ran 'firebase init' and chose the hosting option as before.
The 'dist' folder was created in the root, but I manually moved it to the 'app' folder.
I then ran 'npm run build'; this created the new files.
In ‘firebase.json’ I had to add the following code
"hosting": {
  "public": "app/dist",
Ran ‘firebase serve’.
So I have the site showing using ‘firebase serve’ .
firebase deploy --only hosting
And I can now see it on the app online :) I hope this saves someone a heap of time, as it took me a while to work this one out :/
The task was simple: I'd added a load of menu items to the top menu using snippets, but I wanted to change the order.
What, no drag and drop?! And neither could I see an input box to enter a position. Surely I wouldn't have to get my hands dirty and write some code for this.
Here's an easy fix though.
As the navigation uses flexbox, all you need to do is use the Bootstrap order class.
And then in your 'navigation' item 'snippet' just add the ordering like this
Although I couldn't find a 'gsutil' command to empty a bucket as such, the following will remove the bucket and then recreate it.
I've also added code here for how to do this from an Airflow DAG.
% gsutil -m rm -r gs://djem_photos
% gsutil mb gs://djem_photos
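The Airflow side can be sketched like this (the DAG wiring is left as a comment and the task id is hypothetical); the command string is just the same two gsutil calls chained:

```python
# gsutil has no dedicated "empty bucket" verb, so the step removes the
# bucket and its contents, then recreates it.
BUCKET = "gs://djem_photos"
empty_bucket_cmd = f"gsutil -m rm -r {BUCKET} && gsutil mb {BUCKET}"

# In an Airflow DAG this would typically be handed to a BashOperator:
# from airflow.operators.bash import BashOperator
# empty_bucket = BashOperator(task_id="empty_bucket",
#                             bash_command=empty_bucket_cmd)
print(empty_bucket_cmd)
```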
Here's a straightforward piece of code, but I found this solution difficult to find and uncover. So here's the solution to getting the data from an Airflow connection into my DAG in Google Cloud Platform.
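As a sketch of the kind of code involved, assuming Airflow 2.x (the connection id 'my_gcp_conn' is a placeholder), a task can read a connection's fields through BaseHook:

```python
def connection_details(conn_id: str = "my_gcp_conn"):
    """Return (login, password, extras) for an Airflow connection.

    Sketch only: assumes Airflow 2.x (BaseHook in airflow.hooks.base);
    'my_gcp_conn' is a placeholder connection id.
    """
    # Imported lazily so this file stays importable without Airflow.
    from airflow.hooks.base import BaseHook

    conn = BaseHook.get_connection(conn_id)
    return conn.login, conn.password, conn.extra_dejson
```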
Hope that helps someone out there !
Although this post is about solving my specific issue with locating the 'googleads.yaml' and its 'private_key' file, it will also be useful to others who want to access any other file that they have uploaded.
I had the following code that I could not get to work.
FOLDER_PATH = '~/dags/'

def gam_traffic():
    client = ad_manager.AdManagerClient.LoadFromStorage(FOLDER_PATH + 'googleads.yaml')
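One thing worth ruling out with a FOLDER_PATH like the one above: Python's open() doesn't expand '~' by itself, so a literal '~/dags/' can fail even when the file exists; os.path.expanduser does the expansion:

```python
import os

path = "~/dags/googleads.yaml"

# open(path) would look for a directory literally named "~";
# expanduser swaps the "~" for the real home directory first.
expanded = os.path.expanduser(path)
print(expanded)
```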
I'd tried other folder names, like the gs:// bucket address and the full address.
RED HERRING ALERT
Using the gs:// bucket address worked fine for saving and retrieving .csv files using Python pandas, but I couldn't get the address to work when using the Python open command. My assumption now is that the pandas library must contain some 'magic' that processes this address when it sees that the URI starts with 'gs://'.
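That assumption is roughly right: pandas routes 'gs://' URLs through fsspec (backed by the gcsfs package), while the builtin open() treats the same string as an ordinary local path, which is easy to demonstrate without any cloud access:

```python
# open() has no idea about object-store URLs: it treats
# "gs://some-bucket/file.csv" as a relative local path and raises
# FileNotFoundError, whereas pandas hands such URLs to fsspec/gcsfs.
try:
    open("gs://some-bucket/file.csv")
    raised = False
except FileNotFoundError:
    raised = True

print("open() treats gs:// as a local path:", raised)
```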
To solve my issue and find the path I needed, I ran the following tasks to show what folder we are in, and all the files and folders in it.
import os
import logging

# Print the working directory
def print_working_dir():
    directory = os.getcwd()
    logging.info(directory)
    return "success"
import glob
import logging

# List everything under the working directory
def list_working_dir():
    for filename in glob.iglob("./**/*", recursive=True):
        logging.info(filename)
The second task takes ages to run, as it's iterating through loads of folders. In our case we intercepted the logs while it was running, as we could already see the information that we needed.
In our case the filepath we needed was
./gcsfuse/dags/googleads.yaml