In a general folder, I have a subset of subfolders that I would like to open iteratively in order to download the datasets (in .xlsx format) they contain.
Within the main folder:
there are subfolders that, as you can see, follow a specific naming pattern,
and within each of them there is a .xlsx dataset
named similarly to the subfolder that contains it.
I was wondering how to extract them using some iterative function. Based on code I found on this forum, I readapted the following for loop, but with no results:
url <- 'urlnamexxx'
for (folder in url) {
  temp <- tempfile(fileext = ".xlsx")
  download.file(url, temp)
  readxl::read_xlsx(temp)
}
Could you please give me some suggestions?
If anything is unclear, just comment below and let me know what details I should provide.



The script you provided downloads a single Excel file from a given URL, saves it as a temporary file, and then reads it into R with the readxl::read_xlsx() function. This will not work for multiple URLs, nor for URLs that point to directories rather than files.

To extract data from Google Drive in a structured manner, you need to use Google Drive's API, which provides a way to list and download files. For R, several packages can facilitate this, such as googledrive. The approach involves:

1. Authenticate with Google Drive.
2. Identify the parent folder.
3. List all its subfolders.
4. Loop over the subfolders, identifying and downloading the .xlsx files.

Such a script authenticates with Google Drive, locates the parent folder, lists its subfolders, and then iterates over these subfolders to identify and download the .xlsx files they contain. Replace "~/path/to/your/parent/folder" and "~/path/to/save/files" with your actual paths. This assumes that you have already set up Google Drive API credentials, that the googledrive package is installed (install.packages("googledrive")), and that you have the necessary permissions to access the files and folders on Google Drive.
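A minimal sketch of those four steps using the googledrive and readxl packages might look like the following. The two placeholder paths are the ones mentioned above and must be replaced with your own; the use of `pattern` in `drive_ls()` to match file names is an assumption about your folder contents (that each subfolder holds one or more files ending in .xlsx).

```r
# Sketch only: assumes googledrive and readxl are installed and that the
# placeholder paths below are replaced with your actual Drive/local paths.
library(googledrive)
library(readxl)

# 1. Authenticate with Google Drive (opens a browser window on first use)
drive_auth()

# 2. Identify the parent folder on Drive
parent <- drive_get("~/path/to/your/parent/folder")

# 3. List all subfolders of the parent
subfolders <- drive_ls(parent, type = "folder")

# 4. Loop over the subfolders, downloading each .xlsx file and reading it in
datasets <- list()
for (i in seq_len(nrow(subfolders))) {
  xlsx_files <- drive_ls(subfolders[i, ], pattern = "\\.xlsx$")
  for (j in seq_len(nrow(xlsx_files))) {
    local_path <- file.path("~/path/to/save/files", xlsx_files$name[j])
    drive_download(xlsx_files[j, ], path = local_path, overwrite = TRUE)
    # Keep each dataset in a named list, keyed by its file name
    datasets[[xlsx_files$name[j]]] <- read_xlsx(local_path)
  }
}
```

After this runs, `datasets` is a named list of tibbles, one per downloaded .xlsx file, which you can then combine or process further as needed.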