Week 5 – What to Consider when Designing and Getting your Webpage on the Internet for all to see

Since there was no reading for week four, I used the time to complete other tasks set for the web module and to submit my coursework. When creating a web page, it seems the site is designed for two or more personas: the first persona represents the main target audience, while the second represents another group of people you may wish to attract to your webpage. Another thing to take into consideration is screen real estate and resolution. Designers have to pick something that not only gives a good experience, but also looks good at the resolutions people commonly use, or they can select a liquid layout instead. Creators also take into account how balanced and well organised things appear. People prefer well-organised, symmetrically balanced pages (elements on either side are balanced around the centre line) to asymmetrically balanced ones (elements are not balanced around the centre line, so one side carries more weight).

Colour also has a huge impact on users when designing a website. It is important not only to select colours that fit the theme of the company, but also to create contrast with a good balance. To do this, creators normally use a triadic colour scheme or a tetradic colour scheme. These schemes have a dramatic impact on the way a user views the webpage and can signify all sorts of emotions.
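A triadic scheme can even be calculated: keep the lightness and saturation of a starting colour and rotate its hue a third of the way around the colour wheel, twice. A sketch in Python (the starting colour is just an illustration):

```python
import colorsys

def triadic(hex_colour):
    """Return the two triadic partners of a hex colour (hue shifted by 120 and 240 degrees)."""
    r, g, b = (int(hex_colour[i:i + 2], 16) / 255 for i in (1, 3, 5))
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    partners = []
    for shift in (1 / 3, 2 / 3):  # a third and two thirds of the way round the wheel
        nr, ng, nb = colorsys.hls_to_rgb((h + shift) % 1.0, l, s)
        partners.append("#%02x%02x%02x" % (round(nr * 255), round(ng * 255), round(nb * 255)))
    return partners

print(triadic("#ff0000"))  # red's triadic partners are green and blue
```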

In addition, the quality of graphics and where they appear is important. Obviously an image has to relate to the theme of the website, but where it is positioned also matters. The type of file matters too, depending on how the designer wishes the site to look: for example, whether the image is vector-based or a bitmap for efficiency, or a master image so that it can be scaled and rotated. Creators may also use PNGs, JPEGs and GIFs depending on the suitability of the task; for example, designers do not want the computer dithering too much (dithering being how the computer approximates colours it does not have, because there are only two hundred and sixteen web-safe colours). Typography is also important, because aspects such as the spacing between lines and letters can be visually pleasing to users or make text incredibly difficult to read. As a result, creators and designers must pick suitable typefaces that not only fit the theme but also tempt the target audience into the website.
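Spacing between lines and letters is something CSS controls directly; a small sketch, with values that are purely illustrative rather than recommendations:

```css
/* Looser leading and a touch of letter-spacing for readable body text. */
body {
  font-family: Georgia, serif;
  line-height: 1.5;        /* space between lines, relative to the font size */
  letter-spacing: 0.02em;  /* space between letters */
}
```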

It is interesting to know that when putting pages on the web, you can purchase a domain name if you want or are required to have one. However, to put pages on the web you must have a server, otherwise others cannot view them. A domain name belongs to one client or person and can be registered to that person via an accredited registrar such as http://www.godaddy.com; expect to spend around £35–£50 when purchasing the domain. The web server must also have enough space for the website you have created; you may already have a server, but if not you can rent space from what is known as a host. In addition, File Transfer Protocol (FTP) can be used to transfer large files across the internet between two computers.
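As a sketch of how FTP could be used to put a finished site onto a host, here is an example using Python's standard ftplib module. The hostname, username and password would be your own, and a real host may need extra steps (passive mode, creating the remote folders first, and so on):

```python
import os
from ftplib import FTP

def collect_files(root):
    """Walk the local site folder and list every file to upload, relative to the root."""
    paths = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            paths.append(os.path.relpath(os.path.join(dirpath, name), root))
    return sorted(paths)

def upload_site(root, host, user, password):
    """Send each file to the web host over FTP (assumes the remote folders already exist)."""
    with FTP(host) as ftp:
        ftp.login(user, password)
        for rel in collect_files(root):
            with open(os.path.join(root, rel), "rb") as fh:
                ftp.storbinary("STOR " + rel.replace(os.sep, "/"), fh)
```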

Upon reflection, this week's reading was interesting in that I found websites are not just targeted at one person; they are targeted at many, but with a primary objective of appealing to one specific group. I feel I have also offered some additional material, because I read chapter twenty-one in the third edition. However, to improve my work further I need to look at a list of specialist terminology and understand the terms. This will enable me to use them more frequently and improve my academic writing style. I also feel I need to evaluate the sources I am given and offer an alternative way of conveying their points, rather than taking them at face value.

Citations for this post:

McIntire, P. 2008. Visual design for the modern web. Berkeley, CA: New Riders.

Niederst Robbins, J. 2007. Learning web design. 3rd ed. Beijing: O'Reilly.

Watrall, E. and Siarto, J. 2009. Head first Web design. Beijing: O’Reilly.

Third week's reading – HTML and the development of HTML5

Reflecting on previous work, I already know the difference between CSS and HTML. In this blog the focus is HTML and its development to HTML5. HTML stands for HyperText Markup Language; it focuses on the content of the webpage, unlike CSS, which focuses on the presentation. HTML has evolved over the years and is used by browsers to display a webpage and the content it contains.

Having read through the material, HTML seems difficult when first looked at but can actually be quite simple. For beginners, the basic rule of coding is that once you understand the names given to the elements, you may begin to code. For example, h1 stands for heading one (typically the biggest and boldest, as it is used for the main title). When you code, you open an element with a tag such as <h1> and close it with a matching tag containing a forward slash. As a result the code will look like this: <h1>Edge Hill University</h1>. All this piece of language does is tell the web browser that this is heading one.
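To put that heading tag in context, here is a minimal sketch of a whole page; the title and text are just placeholders:

```html
<!DOCTYPE html>
<html>
  <head>
    <title>My first page</title>
  </head>
  <body>
    <h1>Edge Hill University</h1>  <!-- rendered as the main heading -->
    <p>Welcome to my blog.</p>
  </body>
</html>
```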

Browsers, depending on the version of HTML you are using, may be very sensitive to mistakes. For instance, HTML is compatible with XML but the two are very different: HTML may still display the page when capital letters have been used or a backslash has been typed instead of a forward slash, whereas XML simply will not accept these lazy, poorly practised mistakes.
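That difference in strictness can be seen with Python's standard XML parser, which behaves like a strict XML tool rather than a forgiving browser (the heading text is just an example):

```python
import xml.etree.ElementTree as ET

well_formed = "<h1>Edge Hill University</h1>"
mismatched = "<H1>Edge Hill University</h1>"  # opening tag is upper case, closing tag is not

ET.fromstring(well_formed)  # parses without complaint

try:
    ET.fromstring(mismatched)
except ET.ParseError as err:
    # A browser would shrug this off, but XML refuses the mismatched tags.
    print("rejected:", err)
```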

Furthermore, HTML has been developed and standardised by the W3C. HTML5 has been developed so that it can be used on a range of devices, instead of needing different languages or variations of code to be compatible with each device. It also supports better graphics, such as the raster-based canvas element, which allows users to draw 2D and 3D graphics into the webpage using JavaScript. It can also be translated into Braille, so visually impaired users can access it too. Google's Ian Hickson suggests that 'Development of HTML will just proceed until HTML is dead.' It would be interesting to see what the next language to overtake HTML would be, whether it would be supported by web browsers, and what advantages it would have over HTML. Critically, though, this may be difficult, because HTML has become standard practice and is used by most web browsers, so most other languages are being faded out and taken over by HTML.
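As a minimal sketch of the canvas element, this fragment draws one rectangle with JavaScript; the id, size and colour are all arbitrary choices of mine:

```html
<canvas id="demo" width="200" height="100"></canvas>
<script>
  var ctx = document.getElementById("demo").getContext("2d"); // the 2D drawing context
  ctx.fillStyle = "#4a90d9";       // pick a fill colour
  ctx.fillRect(10, 10, 180, 80);   // x, y, width, height
</script>
```

(3D drawing uses the separate WebGL context rather than "2d".)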

Upon reflection, reading this week was quite interesting, not only in establishing the differences between HTML and CSS but also in knowing what each of them is and how they work together. However, to improve I must start offering my own material and extra reading in these blogs, to enable me to grasp an even better understanding of HTML and most of its functions, and a more in-depth knowledge of how it has evolved over time. I should also research why it has become the most popular language used by web browsers, rather than the other languages that are currently being phased out or overtaken.

Citations for this post:

Delivery.acm.org. 2013. Untitled. [online] Available at: http://delivery.acm.org/10.1145/2210000/2209256/p16-anthes.pdf?ip= [Accessed: 17 Nov 2013].

IEEE Xplore. 2013. World Wide Web: Whence, Whither, What Next?. [online] Available at: http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=486966 [Accessed: 17 Nov 2013].

Niederst Robbins, J. 2007. Learning web design. 3rd ed. Beijing: O'Reilly.

Computing History

Recently I have been researching a popular female figure in computing called Grace Murray Hopper. During my research I found that Grace was one of the first women programmers, working during the Second World War. For the navy she worked on one of the first computers, the Harvard Mark I, which was fifty-one feet long, eight feet tall and eight feet wide. This computer could perform three additions per second and store seventy-two words. Compared to technology today, that does not seem a lot at all.

Furthermore, Hopper was instrumental in creating the computer language COBOL. This was so different because it was one of the first languages to respond to English words instead of numbers. She jokingly said she created it because she could not balance her cheque book. She was highly qualified, however, holding a BA in mathematics and physics, and an MA and PhD in mathematics, gained at Vassar College and Yale University.

She is also highly credited with the term 'bug' with regard to computers. When she was working with the Mark II computer in 1945, it stopped functioning properly. Owing to her fascination with how things worked (she had been stripping down alarm clocks from a young age), Hopper began to look for the problem. She found a moth trapped in one of the computer's relays; she carefully removed it and taped it into the log book. The moth can now be viewed in a museum along with the notes from that day.

Hopper's work was so good that she became one of the oldest serving officers in the navy, won numerous awards and, after working up through the ranks, became a rear admiral. She even came out of retirement to continue working for the navy, and as a result retired very late for a person of that period. Today she is still recognised for her work and has inspired women to pursue careers in computing, as well as being a great example in sociology of how women have broken traditional gender roles.

Hopper also spoke about nanoseconds: watch the video of the interview with her below.

Grace Hopper’s Interview on nanoseconds

Overall I think the task went well: I was able to find not only Hopper's life events but also her relationship with computing, and I linked another subject into my writing. I was able to develop my skills in researching a given topic using different search engines and keywords. This time I also selected, from the task, a person I was not familiar with, in order to widen and add depth to my knowledge. I do feel I need to improve my skim reading, though, as this will enable me to select useful webpages faster and leave out repetitive texts. I also need to use specialist terms more and question the information I am reading, to ensure I have analysed each text properly and gained the most useful information from it.

Web Technologies

During research on my chosen topic of cascading style sheets (CSS), I found they first started to appear in the nineteen-nineties. The aim of CSS is to separate the content of a website from its presentation, as well as to give website designers complete control over what they design.

Firstly, the difference between CSS and HTML (HyperText Markup Language) is that HTML deals with the content of the website, while CSS deals with its presentation; the two are designed to work together. As a result of the W3C introducing style sheets as a standard for websites, more and more websites consist of HTML and CSS. However, not all web browsers can cope with cascading style sheets, and so they may fall back on a style sheet of their own. CSS is also a lot quicker to work with than styling in HTML, because it does not require the font to be set each time there is text, which saves constantly copying, pasting or retyping code. (See the example of CSS below.)

Example of CSS
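A rule like the following gives the flavour of CSS; the selector and values are only illustrative:

```css
/* Styles every h1 on the site, so the font is declared once
   rather than set each time a heading appears. */
h1 {
  color: #333333;
  font-family: Georgia, serif;
  font-size: 2em;
}
```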

Secondly, CSS promised a lot of things, such as total control over design for creators of websites. However, this is not entirely true, because the technologists who created the style sheets have an influence on the way the creator of a website designs with them. The creator is using the styles the technologists have set out, and therefore does not have complete control over the design and presentation. As people become more aware of CSS, though, they will discover they can completely override the style sheet they are using. Consequently, this will give creators of websites complete control, fulfilling the role CSS was created for. Moreover, in most other areas CSS has improved things, such as making the separation between presentation and content more defined, making sites easier to create and faster to download, and being more compatible with other languages like XML.
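Overriding a supplied style sheet can be as simple as writing a more specific rule of your own; in this sketch the class name is made up:

```css
/* A generic rule shipped with a ready-made style sheet. */
p { color: grey; }

/* The site creator's rule wins for paragraphs with this class,
   because a class selector is more specific than a bare element selector. */
p.intro { color: black; }
```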

Upon reflection of the research I carried out, I feel the sources I used were very informative, and it helped that I used the search engine 'Dogpile' as well as 'Google'. This allowed me to gain more information on the topic of CSS without going through the same pages repeatedly. On the other hand, I do feel I could have expanded the range of research methods I used, because I mainly used the internet and websites. Next time I would like to include more than just one book and some journals. I also think I could improve my general knowledge of computing as a subject, so that I can better understand some of the more complex reading materials, for example by researching certain terms. I also think I should have done a little research on a few of the topics within the task, to see which one had the most information, instead of picking the one I felt most familiar with. This is because I found it difficult to gain access to good-quality information on CSS, as most of the websites I read were very repetitive.


The Second Week's reading list

Having asked questions last week about how to read these articles properly, I have discovered that the full text behind the abstract is a lot richer in information. As a result, this week I read that the average search on engines such as Google is 2.3 words long. Furthermore, the articles explained how web crawlers gather information from trillions of web pages, and how each crawler handles only the specific pages it is assigned. A simple crawler could take days to complete a crawl; however, over time they have improved. For example, they only look for relevant information, they avoid sites with lots of spam, and they tend to leave out the web pages with the least relevance to the search. I also read how to narrow down searches with quotation marks. Interestingly, when comparing search engines, because of the different ways in which their web crawlers are programmed, around 85% of results were unique to one engine and only around 3% of results appeared on all three engines searched. This could be because different systems, such as Lycos, store the information differently in their databases, due to their crawlers being programmed slightly differently.
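Those overlap figures are easy to illustrate with a small Python sketch comparing pretend result sets from three engines (the addresses are invented):

```python
def overlap(result_sets):
    """Return (fraction of URLs found by only one engine,
               fraction found by every engine)."""
    all_urls = set().union(*result_sets)
    only_one = sum(1 for u in all_urls if sum(u in s for s in result_sets) == 1)
    in_all = sum(1 for u in all_urls if all(u in s for s in result_sets))
    return only_one / len(all_urls), in_all / len(all_urls)

# Three pretend engines returning slightly different results for one query.
google = {"a.com", "b.com", "c.com"}
lycos = {"b.com", "c.com", "d.com"}
dogpile = {"c.com", "e.com"}

unique_frac, shared_frac = overlap([google, lycos, dogpile])
print(unique_frac, shared_frac)
```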

Citations for this week's posts:

Hawking, D. 2006. Web search engines. Part 1. Computer, 39 (6), pp. 86-88. Available from: doi: 10.1109/MC.2006.213 [Accessed: 15 Oct 2013].

Hawking, D. 2006. Web Search Engines: Part 2. Computer, 39 (8), pp. 88-90. Available from: doi: 10.1109/MC.2006.286 [Accessed: 15 Oct 2013].

Mauldin, M. 1997. Lycos: design choices in an Internet search service. IEEE Expert, 12 (1), pp. 8-11. Available from: doi: 10.1109/64.577466 [Accessed: 15 Oct 2013].

Spink, A., Jansen, B., Blakely, C. and Koshman, S. 2006. Overlap Among Major Web Search Engines. Information Technology: New Generations, 2006. ITNG 2006. Third International Conference on, pp. 370-374. Available from: doi: 10.1109/ITNG.2006.105 [Accessed: 15 Oct 2013].

The first week's books

Having read an abstract by Berners-Lee, I found it interesting that the World Wide Web only started out as a way for people to save information and send it to others; it was only meant for people to access information while working on a project together. Today, however, it is a massive space in which millions of people across the globe communicate. Because the World Wide Web has become so popular, the maintenance and research of the web and its different areas is a never-ending task. I then read another abstract, on Mosaic, by Vetter, which explained that Mosaic was one of the most popular graphically oriented browsers and ran on PCs using Microsoft Windows, on X Windows and on the Macintosh. I also read an abstract on how to unlock hidden content on the World Wide Web in areas such as medicine and travel. In addition, I have started to read up on what the internet is designed for, and found that no one actually owns the web; instead, the web has guidelines monitored by the W3C. Furthermore, I now know what each part of a URL is, and have started to understand the term HTML and its function as a language on the web.
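The parts of a URL can be pulled apart with Python's standard library; the address here is invented purely to label the pieces:

```python
from urllib.parse import urlparse

# An invented address used to show each part of a URL.
parts = urlparse("http://www.example.com:8080/guides/css.html?topic=selectors#history")

print(parts.scheme)    # protocol: "http"
print(parts.hostname)  # domain name: "www.example.com"
print(parts.port)      # port number: 8080
print(parts.path)      # resource on the server: "/guides/css.html"
print(parts.query)     # query string: "topic=selectors"
print(parts.fragment)  # fragment: "history"
```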

After reading the mark scheme, I need to improve by adding my own material, using specialist terms more frequently and adding more depth to my writing. I also need to improve by adding multimedia to my blogs, looking through other people's blogs in the area of computing and challenging their thoughts. As well as this, reading the full PDF files and making notes on them would also help my learning.

Citations for this week's reading:

Berners-Lee, T. 1996. WWW: past, present, and future. Computer, 29 (10), pp. 69-77. Available from: doi: 10.1109/2.539724 [Accessed: 20 Nov 2013].

Niederst Robbins, J. 2007. Learning web design. 3rd ed. Beijing: O'Reilly.

Vetter, R., Spell, C. and Ward, C. 1994. Mosaic and the World Wide Web. Computer, 27 (10), pp. 49-57. Available from: doi: 10.1109/2.318591 [Accessed: 20 Nov 2013].

Weaver, A. 1997. The Internet and the World Wide Web. Industrial Electronics, Control and Instrumentation, 1997. IECON 97. 23rd International Conference on, 4 pp. 1529-1540 vol.4. Available from: doi: 10.1109/IECON.1997.664910 [Accessed: 20 Nov 2013].

Starting the computing course at Edge Hill

After a tiring but amazing Freshers week, it’s time to settle into my computing course. I can’t help but feel quiet excited as I write my very FIRST blog.Although at the moment the amount of information is overwhelming, it is also left me wondering what I am actually capable of. I suppose I have the next three years in which to find out! I’ll post each week about the material I am reading and let’s see how many challenges we can overcome.