Hello again! What is the best way of saving the scraped data into one of my AWS S3 buckets? Or is it more complicated than modifying one of the config/settings files?
Hi @srdjov18, if you've modified the config files to add some new competitions, for example, you can create a PR with your changes. Once the updates to the config are merged, they will run weekly as part of the data pipeline. Not sure if this answers your question. Let me know otherwise.
You basically have two options:

1. Run the scripts on your local machine: `1_acquire.py` will update the raw data in `data/raw`, and `2_prepare.py` will re-create the prepared CSVs in `data/prep`.
2. Create a PR with your config changes, so the updated data is produced by the weekly pipeline.

In general, I'd recommend option 2, because then you can benefit from the automation that updates the data weekly and simply consume the updated data from data.world, Kaggle, or by running a […]. With the first option, the updated data will only live on your local machine. Of course, you can copy the data somewhere else, like an S3 bucket of your own or GDrive (see the sketch below), or simply use the data from your local machine. It's up to you.
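For reference, here's a minimal sketch of that copy-to-S3 step using boto3. The bucket name, key prefix, and local directory are placeholder assumptions rather than anything defined in this repo, and credentials are expected to come from your usual AWS configuration (environment variables or `~/.aws`):

```python
from pathlib import Path

import boto3  # pip install boto3

# Placeholder assumptions -- swap in your own bucket name and prefix.
BUCKET = "my-transfermarkt-data"
PREFIX = "prep"
PREP_DIR = Path("data/prep")  # where 2_prepare.py writes the prepared CSVs

s3 = boto3.client("s3")

# Upload every prepared CSV, keeping the file name as part of the S3 key.
for csv_file in PREP_DIR.glob("*.csv"):
    key = f"{PREFIX}/{csv_file.name}"
    s3.upload_file(str(csv_file), BUCKET, key)
    print(f"uploaded {csv_file} -> s3://{BUCKET}/{key}")
```

If you prefer the AWS CLI, `aws s3 sync data/prep s3://<your-bucket>/prep` does the same thing in one line.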
I'm happy to answer all questions, no worries.