I find that I get data from many different sources. These range from simple .csv files to relational databases to structured XML or JSON files. I have compiled the different approaches that one can use to easily access these datasets.
Local Column Delimited Files
This is probably the most common and easiest approach to load data into R. It requires just one line to read the data, plus a couple of additional lines to tidy up the dataset.
file <- "c:\\my_folder\\my_file.txt"
raw_data <- read.csv(file, sep=",")  ## 'sep' can be a number of options, including "\t" for tab-delimited
names(raw_data) <- c("VAR1","VAR2","RESPONSE1")
Text File From the Internet
I find this very useful when I need to get datasets from a Web site. This is particularly useful if I need to rerun the script and the Web site continually updates its data. It saves me from having to download the dataset into a csv file each time I need to run an update. In this example I use one of my favorite data sources, the National Data Buoy Center. This example pulls data from a buoy (buoy #44025) off the coast of New Jersey. Conveniently, you can use the same read.csv() function that you would use to read a file from your own computer. You simply replace the file location with the URL of the data.
1.file <-"<a href="http://www.ndbc.noaa.gov/view_text_file.php?filename=44025h2011.txt.gz&dir=data/historical/stdmet/">http://www.ndbc.noaa.gov/view_text_file.php?filename=44025h2011.txt.gz&;dir=data/historical/stdmet/</a>"
2.3.raw_data <- read.csv(file, header=T, skip=1)
Files From Other Software
Often I will have Excel files, SPSS files, or SAS datasets sent to me. Once again, I could export the data as a csv file and then import it using the read.csv function. However, taking that approach every time means an additional step, and adding unnecessary steps to a process increases the risk that the data gets corrupted through human error. Furthermore, if the data is updated from time to time, the file that you downloaded last week may not contain the most current data. SPSS and Excel examples follow; a brief SAS sketch is included after the Excel example.
SPSS
library(foreign)
file <- "C:\\my_folder\\my_file.sav"
raw <- as.data.frame(read.spss(file))
Microsoft Excel
library(XLConnect)

file <- "C:\\my_folder\\my_file.xlsx"
raw_wb <- loadWorkbook(file, create=F)
raw <- as.data.frame( readWorksheet(raw_wb, sheet='Sheet1') )
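SAS
The original examples do not include a SAS listing, but the same idea applies. Here is a minimal sketch, assuming the data arrives as a .sas7bdat file and that the sas7bdat package is installed; the file path is just a placeholder.
library(sas7bdat)
file <- "C:\\my_folder\\my_file.sas7bdat"   # placeholder path
raw <- read.sas7bdat(file)                  # the newer 'haven' package's read_sas() works similarly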
Data From Relational Databases
There is the RMySQL library, which is very useful. However, I have generally been in the habit of using the RODBC library. The reason is that I often jump between databases (e.g. Oracle, MSSQL, MySQL). By using the RODBC library I can keep all of my connections in one location and use the same functions regardless of the database. The example below will work on any standard SQL database. You just need to make sure you have set up an ODBC connection called (in this example) MY_DATABASE.
library(RODBC)
channel <- odbcConnect("MY_DATABASE", uid="username", pwd="password")

raw <- sqlQuery(channel, "SELECT * FROM Table1")
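The listing above leaves the ODBC channel open. When the query is finished, the connection can be released with RODBC's odbcClose():
odbcClose(channel)   # not part of the original listing; releases the ODBC connection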
Data from Non-Relational Databases
R has the capability to pull data from non-relational databases. These include HBase on Hadoop (rhbase), Cassandra (RCassandra), and MongoDB (rmongodb). I personally have not used RCassandra, but its documentation is available. The example here uses MongoDB, based on an example provided by MongoDB.
library(rmongodb)
MyMongodb <- "test"
ns <- "articles"
mongo <- mongo.create(db=MyMongodb)

list.d <- mongo.bson.from.list(list(
  "_id"="wes",
  name=list(first="Wesley", last=""),
  sex="M",
  age=40,
  value=c("7","5","8","2")
))
mongo.insert(mongo, "test.MyPeople", list.d)
list.d2 <- mongo.bson.from.list(list(
  "_id"="Article1",
  when=mongo.timestamp.create(strptime("2012-10-01 01:30:00",
    "%Y-%m-%d %H:%M:%S"), increment=1),
  author="wes",
  title="Importing Data Into R from Different Sources",
  text="Provides R code on how to import data into R from different sources.",
  tags=c("R","MongoDB","Cassandra","MySQL","Excel","SPSS"),
  comments=list(
    list(
      who="wes",
      when=mongo.timestamp.create(strptime("2012-10-01 01:35:00",
        "%Y-%m-%d %H:%M:%S"), increment=1),
      comment="I'm open to comments or suggestions on other data sources to include."
    )
  )
))
list.d2
mongo.insert(mongo, "test.MyArticles", list.d2)

res <- mongo.find(mongo, "test.MyArticles", query=list(author="wes"), fields=list(title=1L))
out <- NULL
while (mongo.cursor.next(res)){
  out <- c(out, list(mongo.bson.to.list(mongo.cursor.value(res))))
}

out
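The example leaves the MongoDB connection open; if needed, the handle can be released with rmongodb's mongo.destroy() (not part of the original example):
mongo.destroy(mongo)   # closes the connection opened by mongo.create()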
Copied and Pasted Text
01.raw_txt <- "
02.STATE READY TOTAL
03.AL3636
04.AK58
05.AZ1516
06.AR2127
07.CA4343
08.CT5668
09.DE2222
10.DC77
11.FL130132
12.GA5354
13.HI1116
14.ID1111
15.IL2424
16.IN6577
17.IA125130
18.KS2226
19.KY3434
20.LA2734
21.ME9496
22.MD2526
23.MA8292
24.Mi119126
25.MN6980
26.MS4343
27.MO7482
28.MT3440
29.NE913
30.NV6464
31.NM120137
32.NY6062
33.NJ2933
34.NH4445
35.ND116135
36.NC2933
37.OH114130
38.OK1922
39.PA101131
40.RI3232
41.Sc3545
42.SD2525
43.TN3034
44.TX1425
45.UT1111
46.VT3349
47.VA108124
48.WV2736
49.WI122125
50.WY1214
51."
52.raw_data <- textConnection(raw_txt)
53.raw <- read.table(raw_data, header=TRUE, comment.char="#", sep="")
54.close.connection(raw_data)
55.56.raw
57.58.###Or the following line can be used
59.60.raw <- read.table(header=TRUE, text=raw_txt)
Structured Local or Remote Data
One feature that I find quite useful is when there is a Web site with a table that I want to analyze. R has the capability to read through the HTML and import the table that you want. This example uses the XML library and pulls down the population by country in the world. Once the data is brought into R it may need to be cleaned up a bit, removing unnecessary columns and other stray characters; a brief cleanup sketch follows the listing below. The examples here use remote data from other Web sites. If the data is available as a local file then it can be imported in a similar fashion, using the file name rather than the URL.
library(XML)

url <- "http://en.wikipedia.org/wiki/List_of_countries_by_population"
population = readHTMLTable(url, which=3)
population
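As noted above, tables scraped this way usually need a little cleanup. A minimal sketch, assuming the columns of interest sit in positions 2 and 3 and that the population figures contain commas and footnote markers (the actual layout of the Wikipedia table may differ):
# Hypothetical cleanup: column positions and contents depend on the page's current layout
population <- population[, 2:3]                  # keep only the country and population columns (assumed positions)
names(population) <- c("Country", "Population")
population$Population <- as.numeric(gsub("[^0-9]", "", as.character(population$Population)))  # strip commas, footnotes, etc.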
Or you can use this feature to simply grab XML content. I have found this particularly useful when I need geospatial data and need to get the latitude/longitude of a location (this example uses the Open Street Map API provided by MapQuest). This example obtains the coordinates of the United States White House.
1.url<-"http://open.mapquestapi.com/geocoding/v1/address?location=1600%20Pennsylvania%20Ave,%20Washington,%20DC&outFormat=xml"
2.mygeo <- xmlToDataFrame(url)
3.mygeo$result
An alternate approach is to use the JSON format. I generally find that JSON is a better format, and it can be readily used in most programming languages.
library(rjson)
url <- "http://open.mapquestapi.com/geocoding/v1/address?location=1600%20Pennsylvania%20Ave,%20Washington,%20DC&outFormat=json"

raw_json <- scan(url, "", sep="\n")

mygeo <- fromJSON(raw_json)
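fromJSON() returns a nested list. A short sketch of pulling out the latitude and longitude, assuming the response follows MapQuest's documented layout (results, then locations, then latLng):
# Assumes the standard MapQuest geocoding response structure
latlng <- mygeo$results[[1]]$locations[[1]]$latLng
latlng$lat   # latitude
latlng$lng   # longitude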