In this seminar we will report the results of two series of investigations into the L2 listening performance of Hong Kong university students, and suggest using partial transcription as a tool for helping learners improve their ability to recognise words in connected English speech.

In the first part of the talk, Richard will present findings from two sets of studies, carried out first with HKUST students and then with students from PolyU, HKU and CUHK, which are among the first to focus specifically on the spoken word recognition of L2 listeners. Both used transcription to provide insights into the ability of Hong Kong learners of English to catch words in connected speech (news broadcasts and documentaries). The results showed that learners at intermediate level had worryingly high levels of difficulty: even at relatively slow speech rates, this group consistently recognised no more than three out of every four of the 1,000 most frequent words of English, making successful comprehension extremely unlikely. These results suggest that the ability to recognise words (especially frequent ones) in connected speech is a vital prerequisite for comprehension, and that too little attention is paid to developing spoken word recognition skills in Hong Kong.

In the second part of the talk, Fun will look at the use of a novel methodology, partial transcription, for investigating online processing problems in L2 listening, and will challenge the widely held assumption that L2 listening comprehension is an interactive process. She will discuss 1) some of the processing problems experienced by L2 listeners; 2) why partial transcription could improve listeners' inferencing skills and their perception of both segmental and suprasegmental speech features; and 3) how partial transcription can be devised for classroom and self-access use. Practical considerations such as the choice of materials, the selection of transcription gaps and learners' proficiency will also be discussed.