Monday, November 29, 2010

Week 13 Comments

Comments, away!

http://christyfic.blogspot.com/2010/11/reading-notes-week-13-dec-6-2010.html?showComment=1291048922139#c6914149072124833215

http://guybrariantim.blogspot.com/2010/12/readings-for-1206.html?showComment=1291307964538#c7836299966104257179

Week 13 Reading Notes

Alright everyone, we're almost to the finish line!

No Place to Hide
http://www.noplacetohide.net/
I ran into a 404 error on this site for a while, for reasons I couldn't figure out at first. I later learned that the original link has a space at the end of it, causing the aforementioned 404. So if anyone else relied on the link, either delete the trailing space or type the address in manually.

Now, I'm not sure what it is I should be reading here, but from the snippets I've skimmed, it is much like the link below. With the digital revolution, the ways we are all being watched and monitored evolved as well. Correct me if I'm wrong, but didn't Orwell's 1984 and other dystopian works from various authors discuss this in detail before the digital revolution was even a plausible concept?

TIA and Data Mining
Long story short: this is a site collecting links and documents explaining why the TIA program turned to data mining. The government now holds a collection of information on people (an "information signature") and looks for trends in it to stop crime and terrorist activity. I'm honestly not shocked in the least by the idea, as I've seen similar stories, and with actions such as the Patriot Act, how can you not assume something like this was going on?

Maybe I'm just too familiar with Orwell's 1984 to be shaken by this.
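To make the "looking for trends" idea a little more concrete, here's a minimal sketch of co-occurrence counting, which is the crude core of this kind of data mining. Everything here (the activities, the records) is invented for illustration; a real TIA-style system would obviously work at an entirely different scale and sophistication.

```python
from collections import Counter
from itertools import combinations

# Hypothetical "information signatures": each record is the set of
# activities tied to one (anonymized) person. All data is invented.
records = [
    {"wire_transfer", "one_way_ticket", "storage_rental"},
    {"library_visit", "wire_transfer"},
    {"one_way_ticket", "wire_transfer", "storage_rental"},
    {"library_visit", "grocery_run"},
]

# Count how often pairs of activities co-occur across records --
# "trend spotting" in its simplest possible form.
pair_counts = Counter()
for record in records:
    for pair in combinations(sorted(record), 2):
        pair_counts[pair] += 1

most_common_pair, count = pair_counts.most_common(1)[0]
print(most_common_pair, count)
```

The unsettling part, of course, is not the counting itself but who holds the records and what "suspicious pair" gets defined as.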


Youtube Link
I ran into an error stating it was taken down due to a copyright claim by Viacom. It also seems I am not the only one who had this error. If there is a new link posted, I will check it. But for now, this is the end result.

Friday, November 26, 2010

Week 12 Muddiest Point

Nothing out of the ordinary again, so no muddiest point this week.

And my apologies for having this so late in the day. Holiday madness and all that.

Happy (belated) Thanksgiving, everyone!

Tuesday, November 23, 2010

Assignment 6 - Website

My website is now live! It isn't anything special to look at, and I'll admit I toyed around with color schemes for quite a while just to get something presentable out there.

My Site.

And that, ladies and gentlemen, concludes my horrible attempt at programming.

Monday, November 22, 2010

Week 12 Reading Notes

I almost can't believe the semester is nearly over. Note how I say "almost" and "nearly." It's not over yet, and as I'm not sure if I've met my "quota" yet, I am going to continue bringing all of you my thoughts on the reading assignments for class. Then again, I'm not even sure I would stop, as I enjoy seeing the comments and the discussions.

So, with that, I present you with my thoughts on this week's readings. Enjoy!

Weblogs: Their Use and Application in Science and Technology Libraries

So, after all this time of using a blog to share our thoughts and research for the class, we now get to see research showing that we aren't following the trends of the angst-driven teens swarming the blogosphere.

The fact that the article used the term "blogosphere" caused me to chuckle a bit. Perhaps I've read too much xkcd. . .

Interestingly enough, the article starts off with a history of blogging (starting from the first website, at that), which made me think of Laszlo's Linked when it came to linking the sites and the "birth" of the blogosphere.

Overall, the article brings up a few good points about blogging as a viable option for group projects (timestamps, little setup, etc.) and for reference work (finding information on subjects). I do feel there are some advantages to using blogs over e-mail, but in practice it doesn't seem to be as effective in some ways. The students I advise have a blog (on Blogger, at that), but only a select few use it, while the others prefer e-mail, Facebook, or face-to-face options. Additionally, using reference-oriented blogs as an outsider may raise the same question that comes up whenever one uses the internet for research: "Is this information valid?"

Just tossing that out there for anyone who wants to roll with a discussion.

Using a wiki to manage a library instruction program: Sharing knowledge to better serve patrons

After slogging through the obligatory "What is a wiki?" and "Here is how you start one" sections, the article makes a shift into what the title states it should cover.

Which, sadly, feels like a reiteration of what a wiki is. After reading the article, I felt as though I had gone through a recursive IF(WHILE()) loop: I was told one thing (a wiki is a way of sharing information) and then saw something very similar later. In my view, the article just showed that a wiki fills the in-classroom gaps for this field, but honestly, wouldn't that be the case in ANY classroom environment or field of study?

Creating the academic library folksonomy: Put social tagging to work at your institution

Once again, it is nice to see a few things go full circle, as I've found myself having discussions with my colleagues and most of you here regarding folksonomies and even general metadata in the form of subject tags.

Honestly, I don't have much to say about it. There was little here I haven't run into yet (especially with what was discussed in LIS2000), and the most useful section turned out to be the end, as it consisted of suggestions to approaching this idea in an individual library. A few websites were also suggested, but again, nothing that really impacted me in the least.

I can go on a few tangents involving metadata and folksonomies, but I think I've covered those in previous posts.
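For anyone who wants a concrete picture of how a folksonomy differs from assigned subject headings, here's a minimal sketch. The titles and patron tags below are invented; the point is just that the "headings" are tallied bottom-up from individual tagging acts rather than handed down by a cataloger.

```python
from collections import Counter

# Hypothetical patron tags applied to two catalog records.
tag_events = [
    ("Linked", "networks"),
    ("Linked", "mathematics"),
    ("Linked", "networks"),
    ("1984", "dystopia"),
    ("1984", "surveillance"),
    ("1984", "dystopia"),
]

# A folksonomy emerges bottom-up: tally tags per item, then let the
# most popular tag act as a de facto subject heading.
folksonomy = {}
for title, tag in tag_events:
    folksonomy.setdefault(title, Counter())[tag] += 1

top_tag = folksonomy["Linked"].most_common(1)[0][0]
print(top_tag)  # networks
```

The messiness the article glosses over lives in that tally: synonyms, misspellings, and in-jokes all count the same as careful terms.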

How a ragtag band created Wikipedia
Video Link

I cannot view this at the moment, as I cannot view videos from this computer. I'll have to find the time to get this done.

Wednesday, November 17, 2010

Week 11 Reading Notes

Web Search Engines
Thanks to Sarah Denzer, I found the article. Apparently, using the full citation of IEEE Computer threw the citation linker system off entirely.

Reading through these articles gives me a different kind of appreciation for the systems used for crawlers. I was familiar with the concept from a book for LIS2000 (Laszlo's Linked), but I didn't know about the degree of equipment involved. Knowing that it would take some of our better net connections 10 days to do a crawl caused me to do a double take, and seeing the numbers made my head swim.

This is a MUCH different idea than what I've seen from the simple "find it" programs I wrote as an undergrad, as this will find the information, index it, and in a way, learn from it. As I said, new appreciation for what is done and how it is done.

Did anyone else have a similar feeling?
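For a sense of the traversal-plus-indexing idea (minus the server farms and the ten-day crawls), here's a toy sketch over an invented in-memory "web." A real crawler fetches pages over HTTP, respects robots.txt, and spreads the work across many machines; this only shows the skeleton of following links and building an inverted index.

```python
from collections import deque

# A toy "web": page name -> (page text, outgoing links). Invented data.
pages = {
    "home": ("welcome to the library", ["catalog", "about"]),
    "catalog": ("search our books and journals", ["about"]),
    "about": ("the library and its history", []),
}

def crawl(start):
    """Breadth-first crawl from `start`, building an inverted index
    that maps each word to the set of pages containing it."""
    index, seen, queue = {}, {start}, deque([start])
    while queue:
        page = queue.popleft()
        text, links = pages[page]
        for word in text.split():
            index.setdefault(word, set()).add(page)
        for link in links:
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return index

index = crawl("home")
print(sorted(index["library"]))  # ['about', 'home']
```

Even this skeleton hints at why the real thing is hard: the queue never ends, pages change behind you, and the index has to answer queries while it's being rebuilt.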

Current Development and Future Trends for the OAI Protocol for Metadata Harvesting
Once again, we’re referencing back to the Dublin Core and other metadata standards to sift through and organize data. While the article does give a few inspiring notes as to how this could come to be with examples of organizations/consortia trying this, I still have to wonder the same as I did before: can this really be done?

I mean, honestly: even with a “standard” set of metadata, how viable will this be? Will we actually have a comprehensive set of usable search terms to actively search the “deep web” (including databases), or are we just going to add more clutter to the already vast amount of data hidden on the Internet?
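For anyone who hasn't seen one, here's roughly what a bare-bones Dublin Core record (the kind OAI-PMH harvesters pass around) looks like. The dc: element names come from the actual Dublin Core namespace, but the record's contents are invented, and a real OAI-PMH response wraps this in considerably more envelope elements than shown here.

```python
import xml.etree.ElementTree as ET

# The real Dublin Core elements namespace; the record below is invented.
DC = "http://purl.org/dc/elements/1.1/"
record = f"""
<metadata xmlns:dc="{DC}">
  <dc:title>Linked</dc:title>
  <dc:creator>Barabasi, Albert-Laszlo</dc:creator>
  <dc:subject>Network science</dc:subject>
  <dc:date>2002</dc:date>
</metadata>
"""

# A harvester sifts records by pulling out whichever fields it indexes.
root = ET.fromstring(record)
title = root.findtext(f"{{{DC}}}title")
print(title)  # Linked
```

The "can this really be done?" question lives in the dc:subject line: fifteen loose elements only get you so far when every repository fills them in differently.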

The Deep Web
Some parts of this article made me think of Laszlo’s book Linked, especially with the early section regarding how sites would often be connected or a crawler would find the data.

Thankfully, the article covered more than that, offering statistics (which give me a newfound respect for the amount of digital data on the internet, measured in thousands of terabytes) and a comparison of “surface” and “deep” web searches, which also explains why the general “sweep” done by standard search engines just doesn’t cut it for finding what you really need.

There isn’t much to say about the article beyond definitions and numbers (and the feeling that it was a plug for certain technologies), but it does make me interested to learn what is really out there hidden away in the depths of the web.

Friday, November 12, 2010

Week 10 Muddiest Point

I don't have a muddiest point this week.

Either I'm understanding enough to get by, or I'm missing something. I'm sure I'll find out soon enough!

Monday, November 8, 2010

Week 10 Reading Assignments

Alright folks, now that the dust is settling after the fiasco of the past few weeks, I should be able to get back on track and get back to my usual writing.
At least, I’m hoping so.

Digital Libraries: challenges and influential work.

The article is basically a history of how we arrived at today’s digital library resources, including the DLI projects and which universities and institutions took part to get us where we are.

Personally, I liked the reference to aggregators and how they are impacting how we do our jobs. Even more interesting is the note on how Google is still the team to beat, as Google Scholar is a product commercial companies are trying to replicate. I still don’t think Google is the end-all-be-all that people make it out to be; there are some great ideas there (such as standard metadata combined with full text), but with my experiences with Google as a whole, I’m not entirely comfortable putting this product on a pedestal.

One thing I will agree with: aggregators are a great idea, especially when coupled with the concept of full-text metadata. Maybe I’m living in a dream world and have been blown away by some of the commercial products I’ve had presented at work, but hey, a guy can hope.

Dewey Meets Turing: Librarians, Computer Scientists, and the Digital Libraries Initiative

To begin, the title gave me a chuckle, and the introduction gave me some hope as to what was to be discussed. Comparing the expectations of Librarians and Computer Scientists, and tossing in Publishers to complete the trifecta? Move over soap operas, library science has you beat!

But now to be serious for a few moments: when you consider library science and computer science merging together to work on something, you’d only assume it could be a match made in heaven. Libraries need ways to sort and sift information, computer scientists need better and more efficient ways to do their own research. Sounds like a good idea in general.

The article explains the complications these two groups faced, especially with the Web connecting machines (and therefore, data) in unexpected ways and improvements to technology and the way computers “think.”

This article does bring up some other food for thought that has come about due to the changes in technology and its integration into library services, the biggest being the acceptance of online-only publications. With all of the debate regarding copyright law and open access publishing, I do have to wonder if this medium will come to be the primary method of doing things, and if so, how long until print materials and other “traditional” library resources and services are phased out for digital materials and “capable” computers?

Thankfully, we get to see some glimmer of hope at the end of the article, showing that we librarians aren’t entirely phased out just yet. . .

Institutional Repositories: Essential Infrastructure for Scholarship in the Digital Age
http://www.arl.org/resources/pubs/br/br226/br226ir.shtml

And now we have an article on Institutional Repositories. With this article, I was walking in blind, as I haven't exactly heard the term used in the workplace before. The author defines an institutional repository as "a set of services that a university offers to the members of its community for the management and dissemination of digital materials created by the institution and its community members." To me, this basically states it is an archive of things created by the members of the institution (in this case, a college) that allows access to this information by members of that community. Have I missed something in this?

It would seem this is a step toward institution-sponsored open access, in that the creator (in this case, a faculty member) can update a previously written work or continue the work on that same topic without the time-consuming steps of scholarly publication. Correct me if I'm wrong, but don't we need more of this in the academic community, where faculty members who want to write about a topic can do so without the hassle?

Maybe I'm living in a dream world, but I would like to see that come to be.

Sunday, November 7, 2010

Week 9 Muddiest Point

A bit late, I know. I do not have a muddiest point this week.

I also do not have any comments this week. I won't go into the details of what has occurred in my life to cause me to trail so far behind. Hopefully I can get back on track without something else happening sooner rather than later.

Monday, November 1, 2010

Assignment 5 - Koha

The link to my virtual shelf is here: http://upitt01-staff.kwc.kohalibrary.com/cgi-bin/koha/virtualshelves/shelves.pl?viewshelf=73

The list is entitled "Books on Paganism."
My username is still APL24 (easier to find that way), and my full name is listed there.


I thought I would do something a bit different from my usual horribly-nerdy lists of books and go with something to expand someone's horizons, especially since religious diversity has suddenly become an issue in current politics and even among the student population. . .

Enjoy!

Week 9 Reading Assignments: XML

The Brighton University Resource Kit for Students
http://student.brighton.ac.uk/burks_6/

From what is posted here, I must say that I would have loved to have access to this while I was an undergrad, especially while I was programming. Even now, there may be some use to have compilers and programming guides on hand. . .

I'm also a fan of the concept as a whole: open source software to fulfill student needs at the beginning of the year. Who has gone to college and NOT wished for something like this at the start of the year?

I'll download the ISO after work (if the other assignments don't get in the way) and see how it goes.


A survey of XML standards: Part 1

A basic set of notes on XML standards (which, when you think about it, are loose at best and impossible at worst in any programming field), including version history, a note on the "flavors of standards," and even external links for training and further reading. I liked the way it breaks individual aspects of XML into chunks for easier understanding.

My only objection to this is the method of writing. It seems rather drab, much like my old programming books, and some of the information that I should have been picking up seemed to just slide away.

I do wish this actually opened with an explanation of XML. As someone with no applicable experience with this, it would have been nice to know a bit more about it.
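Since I wished the survey had opened with a basic explanation, here's the bare-bones version I would have wanted: XML is just nested, named elements carrying text, and any XML library can pull values back out. The record below is invented for illustration.

```python
import xml.etree.ElementTree as ET

# A minimal XML document: a prolog, one root element, nested children.
doc = """<?xml version="1.0"?>
<book>
  <title>Linked</title>
  <year>2002</year>
</book>"""

# Parse it and read the values back out by element name.
book = ET.fromstring(doc)
print(book.findtext("title"), book.findtext("year"))  # Linked 2002
```

That's really the whole trick: unlike HTML, the tag names are yours to define, which is where the "standards" arguments begin.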


Extending Your Markup: An XML Tutorial

By opening with the claim that XML is simple and can solve "all your problems," this article's writer caught my attention right away. Thankfully, the article explains that XML is heavily influenced by HTML and SGML, setting my mind at ease when I start noticing too many similarities in style and questioning whether I've opened the wrong assignment or somehow accidentally discovered time travel. . .

I also see a few correlations between XML and C++ regarding the "declarations" (i.e., the prolog) at the beginning of an XML document. Just a note for personal reference; feel free to ignore it.

The rest of the article seems to be a bit more "user friendly" compared to the previous reading assignment. There's still a bit that's over my head (I still feel as though I am not smart enough for programming), but perhaps after I read this a second (or third) time, I will have a better understanding of what is actually being discussed and explained in this and the other articles.

W3 XML Schema Tutorial
http://www.w3schools.com/Schema/default.asp

Once again, we find ourselves back with another tutorial from W3. This time, we get some basic XML notes and explanations, and then we are neck deep in the concepts of a Schema and how to use it.

There is clearly a trend here: explanation, elaboration, hands-on tutorials, done.
I do like the approach of showing what the "writing" actually looks like, as I personally learn more from having the design in front of me than from simple reiteration of terms.
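In that spirit of having the design in front of me, here's a tiny XML Schema of the sort the tutorial walks through. The xs:element and xs:sequence constructs are real XSD; the "book" shape is invented, and note that actually validating a document against a schema requires a third-party library (lxml or xmlschema), since the standard library only parses.

```python
import xml.etree.ElementTree as ET

# The real XML Schema namespace; the declared "book" shape is invented.
XS = "http://www.w3.org/2001/XMLSchema"
schema = f"""
<xs:schema xmlns:xs="{XS}">
  <xs:element name="book">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="title" type="xs:string"/>
        <xs:element name="year" type="xs:integer"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>
"""

# A schema is itself XML, so we can read off what it declares.
root = ET.fromstring(schema)
declared = [el.get("name") for el in root.iter(f"{{{XS}}}element")]
print(declared)  # ['book', 'title', 'year']
```

The recursion amuses me: the document describing your documents is written in the same notation it constrains.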



Sadly, there really wasn't much I had to say. I am a bit off due to a few things that have occurred, so please bear with me (and my not-to-par writing style) as I move through these things.