Lab Report Analysis Final Draft
Crystal Rodwell
Learning How to Write Lab Reports by Comparing and Contrasting Them
Writing For Engineers
Submitted by
Farha Zaman
March 4, 2019
Author Note
Farha Zaman, Department of Engineering, City College of New York.
This paper was written with the intention of understanding how lab reports should be written in the professional world.
Questions concerning this paper should be addressed to Farha Zaman, Department of Engineering, City College of New York.
Contact: [email protected]
Abstract
This paper analyzes three lab reports, all on the topic of robotics and machine learning. These lab reports were found via online databases and are used for the educational purpose of understanding the different components of an official lab report.
New innovations are making their way into the world of technology and engineering. One of these new technologies is machine learning, which scientists and engineers have been eager to experiment with and learn more about. While researching this topic, I came across three lab reports: “Learning Navigation Behaviors End-to-End with AutoRL,” by Hao-Tien Lewis Chiang, Aleksandra Faust, Marek Fiser, and Anthony Francis; “Depth Prediction Without the Sensors: Leveraging Structure for Unsupervised Learning from Monocular Videos,” by Vincent Casser, Soeren Pirk, Reza Mahjourian, and Anelia Angelova; and last but not least, “Deep Reinforcement Learning for Robotic Manipulation with Asynchronous Off-Policy Updates,” by Shixiang Gu, Ethan Holly, Timothy Lillicrap, and Sergey Levine. These lab reports explicitly discuss and explain experiments that have advanced what we know about machine learning and autonomous learning. In this paper, I will analyze these three lab reports, all on the topic of robotics and machine learning. They were found via online databases and are analyzed for the educational purpose of understanding the different components of an official lab report.
“Learning Navigation Behaviors End-to-End with AutoRL,” by Hao-Tien Lewis Chiang, Aleksandra Faust, Marek Fiser, and Anthony Francis, which I’ll refer to as Lab one, focuses on the ability of robots to use what they already know to navigate a certain area or environment. Instead of being programmed with a preset routine that tells it how to get from point A to point B, the robot figures out how to get to its destination given only the goal and prior knowledge, such as things it may encounter and distances. The robot’s task is learning how to get to its goal destination by itself, hence the term reinforcement learning. At the end of the experiment, the engineers came to the conclusion that AutoRL learns high-quality navigation behaviors that can be implemented on robots. Even though training is very expensive, the P2P and PF end-to-end behaviors display better qualities than RL with hand-crafted hyperparameters and non-learned baselines: they learn new environments quickly, adapt well, and are robust to noise.
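To make the idea of reinforcement learning a bit more concrete, below is a minimal Python sketch of my own of an agent learning to reach a goal on a toy grid by trial and error. It is not the authors' AutoRL system, which automatically tunes rewards and neural network hyperparameters; the grid size, rewards, and learning rates here are made up purely for illustration.

import random

# Toy grid world: the agent starts at (0, 0) and must learn to reach GOAL.
# This is plain tabular Q-learning, a much simpler cousin of the deep RL
# used in the paper; all names and reward values here are illustrative only.
SIZE = 5
GOAL = (4, 4)
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up

ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1
q_table = {}  # maps (state, action_index) -> estimated long-term value

def q(state, a):
    return q_table.get((state, a), 0.0)

def step(state, a):
    # Move within the grid, reward the agent only for reaching the goal.
    dx, dy = ACTIONS[a]
    nxt = (min(max(state[0] + dx, 0), SIZE - 1),
           min(max(state[1] + dy, 0), SIZE - 1))
    reward = 1.0 if nxt == GOAL else -0.01  # small penalty for every extra move
    return nxt, reward, nxt == GOAL

for episode in range(500):
    state = (0, 0)
    done = False
    while not done:
        # Epsilon-greedy: mostly exploit what was learned, sometimes explore.
        if random.random() < EPSILON:
            a = random.randrange(len(ACTIONS))
        else:
            a = max(range(len(ACTIONS)), key=lambda i: q(state, i))
        nxt, reward, done = step(state, a)
        best_next = max(q(nxt, i) for i in range(len(ACTIONS)))
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        q_table[(state, a)] = q(state, a) + ALPHA * (reward + GAMMA * best_next - q(state, a))
        state = nxt

After enough episodes, the learned values trace a path from the start to the goal, which captures, in miniature, what "learning navigation behaviors end-to-end" means.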
In “Depth Prediction Without the Sensors: Leveraging Structure for Unsupervised Learning from Monocular Videos,” by Vincent Casser, Soeren Pirk, Reza Mahjourian, and Anelia Angelova, the second lab report, which I’ll refer to as Lab two, the authors test whether robots can predict distance and scene depth based only on what they see, instead of relying on sensors that can detect these things easily. Using KITTI, a standard benchmark for evaluating depth and ego-motion, the authors found that their model outperformed the existing models that also use motion. They came to the conclusion that their method fails on moving objects but otherwise makes notable qualitative and quantitative improvements. They even created a refinement method to remove as many errors as possible, and in the future they plan to apply it to new experiments.
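To illustrate the geometry that this kind of sensor-free depth learning leans on, here is a small Python sketch of my own: given a predicted depth for one pixel and a predicted camera motion between two video frames, it computes where that pixel should appear in the other frame, which is what lets training compare image colors instead of needing a depth sensor. The camera numbers, depth, and motion below are made-up examples, not values from the paper.

import numpy as np

# Illustrative pinhole-camera reprojection: back-project a pixel to 3D using
# a predicted depth, move it by the predicted camera motion, and project it
# into the other frame. Unsupervised training compares the colors at the two
# locations (a photometric loss), so no depth sensor is required.
K = np.array([[720.0,   0.0, 320.0],     # made-up focal lengths and principal point
              [  0.0, 720.0, 240.0],
              [  0.0,   0.0,   1.0]])
K_inv = np.linalg.inv(K)

u, v = 400.0, 260.0          # a pixel in the target frame
depth = 12.0                 # predicted depth (meters) at that pixel

# Predicted ego-motion from the target frame to the source frame.
R = np.eye(3)                            # assume negligible rotation here
t = np.array([0.1, 0.0, 0.5])            # camera moved slightly right and forward

point_3d = depth * (K_inv @ np.array([u, v, 1.0]))   # pixel -> 3D point
point_src = R @ point_3d + t                          # into the source camera frame
proj = K @ point_src                                  # back onto the image plane
u_src, v_src = proj[0] / proj[2], proj[1] / proj[2]
print(f"pixel ({u}, {v}) in the target frame maps to ({u_src:.1f}, {v_src:.1f}) in the source frame")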
In “Deep Reinforcement Learning for Robotic Manipulation with Asynchronous Off-Policy Updates,” by Shixiang Gu, Ethan Holly, Timothy Lillicrap, and Sergey Levine, which I’ll refer to as Lab three, the authors demonstrate that a recent deep reinforcement learning algorithm based on off-policy training of deep Q-functions can scale to complex 3D manipulation tasks. Put simply, robot control programs can be extended to learn complex manipulation policies from scratch. Using the MuJoCo physics simulator, they performed a detailed investigation that enabled fast comparisons of design choices. They had different sets of robotic arms perform tasks such as reaching, pulling the door, and pushing the door. Even though the robots were assigned these specific tasks, they were only able to complete them because of their prior knowledge. In other words, when a robot is given a task, for example opening a door, it has to use the knowledge instilled in it to figure out what a door is and how to grab something shaped like a door handle.
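To show roughly what "off-policy" training with asynchronous updates means, here is a simple Python sketch of my own: experience gathered by any number of robot "workers" goes into a shared replay buffer, and the learner updates its value estimates from random samples of that buffer rather than only from its own latest actions. The buffer, states, and rewards here are toy illustrations, not the deep Q-function networks the authors actually use.

import random
from collections import deque

# Illustrative replay buffer for off-policy learning: transitions can come
# from many asynchronous collector robots, and the learner samples them in
# any order. A toy tabular value update stands in for the deep Q-function.
GAMMA = 0.9
ALPHA = 0.1

class ReplayBuffer:
    def __init__(self, capacity=10000):
        self.buffer = deque(maxlen=capacity)

    def add(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        return random.sample(list(self.buffer), min(batch_size, len(self.buffer)))

q_values = {}  # (state, action) -> value estimate

def q(s, a):
    return q_values.get((s, a), 0.0)

def learner_step(buffer, actions, batch_size=32):
    # One off-policy update: learn from stored experience, whoever collected it.
    for s, a, r, s_next, done in buffer.sample(batch_size):
        target = r if done else r + GAMMA * max(q(s_next, b) for b in actions)
        q_values[(s, a)] = q(s, a) + ALPHA * (target - q(s, a))

# Example usage with made-up transitions from two "workers":
buffer = ReplayBuffer()
actions = ["push", "pull"]
buffer.add("door_closed", "pull", 1.0, "door_open", True)       # worker 1's experience
buffer.add("door_closed", "push", -0.1, "door_closed", False)   # worker 2's experience
learner_step(buffer, actions)
print(q_values)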
The first and foremost component of a lab report, after the title, is the abstract. An abstract gives an overall gist of what is to come in the lab report. Chiang et al. in Lab one and Casser et al. in Lab two do a wonderful job of constructing a thorough abstract. The authors of Lab one thoroughly explain what they will teach, what problem they are addressing, and which methods, such as P2P, PF, and RL parametrization, they will use to conduct the experiment. Their results show that PF and P2P are respectively 23% and 26% more successful than comparable methods across new environments. This is important because it marks that they found new and better technology for the future, and it makes their abstract stronger: readers are more likely to continue reading knowing that the experiment was a success. Compared to Lab one's abstract, Lab two's, by Casser et al., is much longer but covers a comparable amount of information about its own topic. The results are explained, and the authors make the point that their model outperforms state-of-the-art approaches. The authors also explain the practical relevance of their experiment, which shows readers why they should care, in addition to providing code, which I think is too specific for an abstract. The abstract of the third lab, “Deep Reinforcement Learning for Robotic Manipulation with Asynchronous Off-Policy Updates,” by Shixiang Gu, Ethan Holly, Timothy Lillicrap, and Sergey Levine, is complicated; it was even hard for me to understand its language. There are undefined terms that make it hard to picture what the experiment consists of, although a reader can get the gist that it has to do with machine learning. I do not like this, as it leaves readers with a lot of questions going into the other components of the report.
The introduction of a lab report gives us background information that may be necessary to understand the context of the experiment, what we already know, and the circumstances that come along with the experiment. This is also where a hypothesis is expected to be located. The introduction in Lab one explains that with the technology we have right now, while there are robots that are robust at navigating dynamic environments, those robots are specialized and built for the specific environment they are placed in. These robots would essentially fail if they were placed in a new setting. The introduction also touches on and briefly explains the methods the authors will use, which can seem odd but is actually useful for understanding what is going on. Lab two does not have an “Introduction” section, which is confusing and strays completely from the standard format. After the abstract, there are about two long paragraphs giving some background information before the authors dive into previous work and main methods. My best guess is that this is their introduction section, because it does the job of an introduction, and given that, they should have labeled it. Lab three has a very long introduction. It explains what scientists and engineers already know versus what they are trying to figure out. This perfectly accomplishes the job of an introduction, as it tells readers what they need to know before they can understand the significance of the experiment itself. It describes challenges to the experiment and also restates the purpose of the paper, which you would expect to find in the abstract. Overall, the labs do a decent job of giving context and background about their experiments, even though it would have been more helpful had they kept to the format of a lab report.
Methods are a key component of an experiment, as this section illustrates what was done and why those were the results. A methods section usually contains the materials and steps involved in the experiment. Lab one explains that its methods are modeled with the Partially Observable Markov Decision Process (POMDP) and organizes the methods into subsections A, B, and C. This would be fine; however, the authors use subsection “A” for the POMDP setup, which basically shows how the robots' performance is measured. While this is important, it does not fit in a list of “different methods of AutoRL” and might have worked better as its own subsection within the methods. Subsections “B” and “C” go on to explain different approaches to robot navigation, which form a list that “A” should not be part of. Besides that, the section is thoroughly explained. The methods of the second lab report, “Depth Prediction Without the Sensors: Leveraging Structure for Unsupervised Learning from Monocular Videos,” are divided into the problem setup, the algorithm baseline, the motion model, imposing object size constraints, and the test-time refinement model. This basically shows their step-by-step process, starting with their algorithms and the baseline they started with, then the model they used, and finally the refined model they later created. They, however, failed to demonstrate how they collected their data, and instead explained their data in the next section. For Lab three, the methods section, “Simulated Experiments,” explains the methods that the engineers used for the experiment. The authors show which robot arms they used and which actions the arms had to perform. The section is divided into two parts, reaching and pushing/pulling the door, which really helps show the contrast and complexities of those tasks. They did an amazing job modeling their experiment in the methods section, as it is clear and concise.
Results are the data of the lab report; here you find the answer to the question, or the evidence for the hypothesis, that the reader has been looking for. Section A of the results in Lab one jumps back and forth between results and methods, as it explains what was done in the methods and then the results afterwards. This is confusing, because the reason a lab report has different parts is for results and methods to be in two clearly separate sections. Given that the results of the experiment are presented poorly, the authors of the first lab cannot be fully credited for their work, as their findings are not presented correctly. Compared to Lab one, the results of the experiment in the second lab are organized effectively. The section first explains the datasets and then walks through each dataset the authors came up with, while being sure to include lots of tables and images that really help a reader understand the process. In Lab three, instead of results, the authors title the section “The real experiment.” Here, the simulated experiments are tested in real life, and each part of the experiment is explained again, much like in the methods, which sounds repetitive and is unnecessary. Instead, the authors should have considered showing how well the robots performed and analyzing the different approaches the robots took to completing their tasks. Although all three lab reports have a results section, Lab two did a spectacular job of illuminating the findings of the experiment.
The discussion of a lab report must discuss the results and the process of the experiment. This is usually where the authors explain mistakes, things that did not go as planned, or even whether they changed their hypothesis. In the first lab, the first sentence of the discussion is, “AutoRL is not sample efficient: it took 12 days to train 1000 agents,” and although that is a good point for the discussion, it should certainly have been mentioned in the methods section, where you would explicitly state something like the quote above. Instead, the methods section says “several days,” which is vague. Being vague instead of specific hurts the quality of the lab report and a reader’s overall understanding of the experiment, especially for someone who has read only the methods section. The authors otherwise did a great job discussing their experiments, as they highlighted that the costly training is definitely worth it for robots that come with the perk of robustness to noise. They also point out the flaw in P2P, which is its inability to avoid large-scale local minima. Lab two takes a different approach to the discussion: the authors display their results and discuss them in the same section. So although a discussion is there, it does not follow the standard format. With that in mind, it actually is pretty helpful that the authors decided to combine the discussion and results. Since there are so many datasets, it makes sense to mention them one at a time and discuss them together, rather than displaying the results and then going back in a different section to explain and discuss each and every dataset. Unlike the other lab reports, Lab three combines discussion and conclusion. Taking up almost half a page, Gu et al. restate their purpose and re-explain what they demonstrated. They identify that their method does have some limitations and promise to conduct more experiments to address them. That shows a growth mindset and gives the authors credibility for understanding that, though successful, their work is not finished. All three labs complete their discussions successfully, even though they take different approaches.
The conclusion of a lab report is almost a summary, but it also tells the reader what is to come next and what the results have proved or disproved, and it ends the report. Lab one has a very short conclusion, only about a paragraph, in which the authors briefly restate their purpose, re-highlight their main findings, and admit that their newfound methods are very expensive. The authors also note what they plan to work on next. Similarly, the conclusion section of Lab two is simple. The authors of Lab two cover the basics of their experiment and remind the reader that the lab is about monocular depth and ego-motion. They propose an online refinement technique for their future experiments, mention two things they would like to do in the future, and end with an acknowledgment. It should be noted that the acknowledgment is part of the conclusion section and that they thank only one person. As I have mentioned before, Lab three does not have a separate conclusion; the authors conclude their lab in the discussion section by discussing the experiment and talking about their future experimenting plans. Although this would normally be frowned upon, the purpose of a conclusion is skillfully achieved. Through their different approaches, all three lab reports close by moving on to what the authors will work on in the future.
The acknowledgments in the first lab seem incomplete. The authors actually say, “The authors thank J. Chase Kew, Oscar Ramirez, Lydia Tapia, Vincent Vanhoucke, and Chris Harris for helpful discussions.” They should have at least said, “… for their helpful discussions.” They could also have added why these discussions were important, how they helped the authors, and who these people are. Oddly, in Lab two, the acknowledgment is under the conclusions section and is a mere sentence, like Lab one’s. However, this one is a complete sentence, which is more professional and sincere. Speaking of sincere, just like in the previous labs, the acknowledgment in Lab three is one sentence, but a sincere and detailed one. I am surprised that all three lab reports have such short acknowledgments, but this pattern suggests that acknowledgments are usually just a sentence or two.
Lab one, by Chiang et al., has its references in a smaller section after the appendix. Technically it is not its own section because it falls under the appendix. It is numbered 1–40 and lists all of the references made throughout the entire report. One thing to appreciate about this lab report is that you do not have to read a new citation every three seconds; instead, there is a number where a citation would be, and the full citation can be found in the references. This is also true for Lab three, where instead of the full name of a citation there is a simple number, and a reader can find out what the number refers to in the references section if they so choose. In the second lab I analyzed, Casser et al. simply list the references with no numbers, but definitely in alphabetical order, which is helpful if anyone wants to find a specific citation. All three lab reports follow the basic APA style for referencing a piece of work, adding the last name of the author, the year of publication, and a link to the reference. The reference sections of these labs clearly give credit to the work that each of these lab reports used to conduct its experiments.
A lot of lab reports also include an appendix, where the authors place things like charts and datasets that they refer to throughout the report. The appendix usually comes at the end of the report, and in this case only Lab one has one. In Lab one there are many references to the appendix. Luckily, everything is thoroughly explained, keeping in mind, of course, that this is written for someone who is familiar with these robotics references and this kind of language. Lab two and Lab three do not have an appendix, as everything the authors wanted to show is included in the corresponding sections and paragraphs.
Though these labs were all created differently and carry different information, they still follow a very similar format, one that is standard for official lab reports across the globe. They include an abstract, an introduction, a methods section, results, a discussion, a conclusion, references, acknowledgments of varying lengths, and sometimes an appendix.
References:
Chiang, H.-T. L., Faust, A., Fiser, M., & Francis, A. (2019, February 1). Learning Navigation Behaviors End-to-End with AutoRL. Retrieved from https://arxiv.org/pdf/1809.10124.pdf
Casser, V., Pirk, S., Mahjourian, R., & Angelova, A. (2019). Depth Prediction Without the Sensors: Leveraging Structure for Unsupervised Learning from Monocular Videos. Retrieved from https://arxiv.org/abs/1811.06152
Gu, S., Holly, E., Lillicrap, T., & Levine, S. (2016, November 23). Deep Reinforcement Learning for Robotic Manipulation with Asynchronous Off-Policy Updates. Retrieved from https://arxiv.org/abs/1610.00633